Ex Nunc Intelligence targets legal AI’s credibility crisis
Swiss startup Ex Nunc Intelligence has raised $2.15 million in fresh funding to tackle one of the most pressing issues facing the digital justice system: the lack of trust in legal AI. As courts, law firms and in-house counsel experiment with AI tools to speed up research and decision-making, concerns around reliability, transparency and accountability are intensifying. The new capital will help the company build technology that makes AI-driven legal reasoning traceable, auditable and safe enough for high‑stakes use.
Why legal AI is struggling to earn trust
Over the past two years, generative AI models have rapidly entered the legal sector, promising faster drafting, cheaper research and automated document analysis. Yet several high‑profile incidents of fabricated case citations and opaque reasoning have made judges, regulators and clients wary of over‑reliance on so‑called “black box” systems.
Legal work is inherently high‑risk: a flawed argument, a misinterpreted precedent or an undisclosed conflict can lead to lost cases, financial damage or even wrongful convictions. Many current AI assistants are trained on broad internet data, struggle with jurisdiction‑specific nuances and provide answers without clear evidentiary grounding. For lawyers bound by strict ethical and professional standards, this is unacceptable.
This is the trust gap Ex Nunc Intelligence is positioning itself to close. Rather than focusing on pure speed or automation, the startup is building infrastructure that keeps humans firmly in control while exposing every step of an AI‑supported legal analysis.
Ex Nunc Intelligence’s approach: explainable, auditable AI
The company’s core proposition is that legal AI must be designed from the ground up for explainability, traceability and compliance. Its platform is being developed to sit as a secure layer between legal professionals and underlying AI models, including large language models.
Source‑linked reasoning rather than opaque answers
Instead of generating free‑form text that lawyers are expected to trust at face value, Ex Nunc Intelligence is building systems that anchor every statement to verifiable sources such as statutes, case law, regulations and authoritative commentary. Draft analyses produced through the platform are expected to include:
- Clear citation trails from each legal proposition to primary or secondary sources
- Machine‑readable logs of how the system traversed and weighed those sources
- Configurable jurisdictional filters to avoid cross‑contamination of legal systems
This design aims to let lawyers treat AI output like a junior associate’s memo: a starting point that is fully reviewable, challengeable and grounded in evidence.
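The platform itself is not public, so the following is only a minimal Python sketch of what source-linked output like this might look like; the names `Citation`, `LegalProposition` and `ungrounded` are hypothetical, not Ex Nunc Intelligence's actual API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Citation:
    """A verifiable source anchoring a legal proposition."""
    source_type: str   # e.g. "statute", "case_law", "regulation", "commentary"
    reference: str     # human-readable citation string
    jurisdiction: str  # tag used by the jurisdictional filter

@dataclass
class LegalProposition:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def is_grounded(self, jurisdiction: str) -> bool:
        """Grounded only if at least one citation matches the matter's
        jurisdiction, avoiding cross-contamination of legal systems."""
        return any(c.jurisdiction == jurisdiction for c in self.citations)

def ungrounded(props: list[LegalProposition], jurisdiction: str) -> list[LegalProposition]:
    """Flag statements a reviewer must challenge before a draft leaves review."""
    return [p for p in props if not p.is_grounded(jurisdiction)]
```

In a design like this, a reviewing lawyer sees exactly which propositions lack an evidentiary anchor in the relevant jurisdiction, mirroring how a supervising partner would challenge an unsupported claim in a junior associate's memo.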
Human‑in‑the‑loop controls for high‑stakes decisions
Recognising that legal responsibility cannot be delegated to machines, the platform is being built with mandatory human‑in‑the‑loop workflows. Draft arguments, risk assessments or contract analyses generated with the help of AI must be explicitly reviewed and approved by a qualified professional before they are finalised or shared.
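A mandatory review gate of this kind is straightforward to enforce in software. The sketch below is an illustration under stated assumptions, not the company's implementation; `ReviewGate` and its states are invented for this example:

```python
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"

class ReviewGate:
    """Mandatory human-in-the-loop sign-off: AI output can never move
    straight from draft to approved without a qualified reviewer."""

    def __init__(self, document_id: str):
        self.document_id = document_id
        self.status = Status.DRAFT
        self.approved_by: str | None = None

    def submit_for_review(self) -> None:
        if self.status is not Status.DRAFT:
            raise ValueError("only drafts can be submitted for review")
        self.status = Status.UNDER_REVIEW

    def approve(self, reviewer: str, is_qualified: bool) -> None:
        if self.status is not Status.UNDER_REVIEW:
            raise ValueError("document has not been submitted for review")
        if not is_qualified:
            raise PermissionError("approval requires a qualified professional")
        self.approved_by = reviewer
        self.status = Status.APPROVED
```

The key design choice is that the approved state is unreachable except through an explicit, attributable human action, which is exactly the kind of documented oversight high-risk classifications under the EU AI Act contemplate.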
By embedding these controls, Ex Nunc Intelligence aims to align with emerging AI regulation in Europe, including the EU AI Act, which classifies many legal applications as high‑risk and requires robust oversight, documentation and risk management.
Addressing regulatory and ethical pressure
Regulators and professional bodies across Europe and beyond are moving quickly to set boundaries on how AI systems can be used in legal processes. Courts have begun issuing practice directions on AI‑assisted filings, and bar associations are updating codes of conduct to address confidentiality, bias and competence in the use of digital tools.
In this environment, law firms and corporate legal departments are looking for solutions that do more than plug generic chatbots into their workflows. They need infrastructure that supports:
- Data protection and strict control over sensitive case files
- Detailed audit trails for every AI‑assisted step in a matter
- Configurable risk thresholds and approval chains
- Alignment with internal policies and external regulatory requirements
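One common way to make such audit trails tamper-evident is hash chaining, where each log entry commits to the hash of its predecessor. This is a generic sketch of that technique using only the Python standard library, not a description of any vendor's actual logging format:

```python
import hashlib
import json

class AuditTrail:
    """Append-only, hash-chained log of AI-assisted steps in a matter.
    Each entry stores the hash of its predecessor, so altering any
    earlier record invalidates the rest of the chain."""

    def __init__(self):
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, detail: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action, "detail": detail, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A chain like this lets a firm hand a regulator or risk committee a verifiable record of who (or what) did what, in what order, without trusting the log's custodian.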
By focusing on these structural needs, Ex Nunc Intelligence is positioning itself not merely as a productivity tool but as a compliance‑grade layer for the next generation of legal technology.
Funding to accelerate product and market expansion
The $2.15 million funding round will be used to expand the company’s engineering team, deepen its legal domain expertise and pilot the platform with early customers across Europe. The startup is expected to prioritise partnerships with mid‑ to large‑sized law firms, legaltech providers and corporate legal teams that are already experimenting with AI‑driven workflows but are constrained by internal risk committees and client expectations.
Part of the capital is also likely to be directed toward building integrations with leading document management systems, knowledge bases and case management platforms, making it easier for legal organisations to adopt the technology without overhauling existing infrastructure.
Competitive landscape and strategic positioning
The legaltech market has seen a surge of startups offering generative drafting tools, contract review assistants and AI‑powered research engines. However, many of these solutions compete primarily on speed and user interface, leaving a gap for providers that can demonstrate rigorous governance and risk management.
By centring its value proposition on trust, transparency and regulatory alignment, Ex Nunc Intelligence is carving out a defensible niche. Rather than replacing existing tools, its technology can act as a control layer that validates, enriches or constrains outputs from various underlying AI models, whether proprietary or third‑party.
This strategy also positions the company to benefit from a broader shift: as clients and regulators demand proof of how AI‑assisted legal work is produced, firms will need detailed logs, structured explanations and standardised reporting. The startup’s focus on explainable AI and evidentiary chains speaks directly to those emerging requirements.
What this means for the future of digital justice
The rise of AI in law is no longer a theoretical debate. Courts are already seeing AI‑drafted submissions; in‑house teams are quietly using AI to triage contracts; and legal publishers are embedding models into their research platforms. The question now is not whether AI will be used, but under what conditions and with which safeguards.
By directing fresh capital into technology that prioritises verifiability over velocity, Ex Nunc Intelligence is betting that the winning legal AI solutions will be those that can stand up in front of a judge, a regulator or a disciplinary committee. If the company can deliver on its promise of transparent, auditable and compliant AI workflows, it may help shift the conversation from whether legal AI can be trusted to how it should be governed.
For a sector built on precedent, evidence and accountability, that shift could mark a decisive step toward a more credible and robust era of digital justice.