In remarks to the Westminster Legal Policy Forum on Thursday 26 February, Sir Brian Leveson framed AI as a practical lever for tackling criminal court delay – but only with tight safeguards, improved technical capability across the system, and a preference for closed tools.
Sir Brian Leveson used his Westminster Legal Policy Forum speech to connect two strands of the criminal courts debate that are often treated separately: the scale of delay, and the operational role of technology in reducing it. The remarks were presented as a companion to his Independent Review of the Criminal Courts (Parts 1 and 2), which set out 180 recommendations, including 31 focused on technology and AI as enablers of reform. With the Crown Court backlog standing at “close to 80,000” cases, Leveson described a system in which victims and defendants wait so long that confidence falls away, witnesses disengage, and some defendants “game the system” in the hope that cases will collapse under their own weight.
Against that backdrop, he argued that government must pull “every single lever” available: structural reform, investment in capacity, and a focus on efficiency across policing, prosecution, defence, courts and prisons. Technology – and AI in particular – featured as an enabler that can be applied across the full lifecycle of a case.
At the same time, Leveson was careful to set boundaries. AI adoption should not be “unfettered”. Safeguards are needed to ensure the use of AI strengthens, rather than undermines, core principles of justice. His headline principle was simple and often repeated: AI should augment human decision-making, not replace it.
Context
Leveson attributed delay to familiar drivers – long-term funding constraints, COVID disruption and industrial action – but emphasised an accelerating factor that is now shaping casework in practical terms: the growth and complexity of digital evidence. The “explosion of digital data evidence” has changed what investigators collect, what prosecutors must review, what the defence must search, and how disclosure is managed.
That reality, in his view, demands a shift away from technology as an “add-on”. But he also highlighted a persistent obstacle: fragmentation. The police, CPS, defence community, HMCTS, HMPPS and the judiciary all face different pressures and often deploy bespoke IT solutions. Without interoperability, staff end up manually transferring information between systems, creating cost, delay and operational risk.
Analysis
A. AI in the early stages of a case – evidence preparation, redaction and search
Leveson's most concrete AI examples sit early in the process. He described a role for technology in reducing police bureaucracy, supporting preparation of evidence, and assisting with redaction and file-building. He also pointed to tools that can support CPS decision-making and disclosure, including searches across unused material. He acknowledged that modern cases can involve terabytes of data, with both prosecution and defence facing the same basic problem: time spent locating what matters.
The implication is not that AI solves evidential judgment. Rather, it compresses the mechanical steps around review and retrieval, helping cases come to court ready to be heard.
B. Digital case management – listing, dashboards and case progression
A second cluster of recommendations focused on listing and case progression. Leveson supported a national listing framework in the Crown Court and argued it needs a digital case management system “underneath” it – enabling real-time dashboards and scheduling within a common platform. He welcomed the announced pilot of a digital listing tool in Preston and Isleworth.
In the same vein, he recommended a digital, interactive case progression system for the magistrates' and Crown Courts to support effective progression and accountability for compliance with directions. The underlying point was that automation should free up HMCTS staff for the work technology cannot – and should not – do.
C. Courtroom process – remote participation, transcription and interpretation
Leveson also focused on using court time better. He advocated greater remote participation, with preliminary Crown Court hearings defaulting to a judge in court and other parties appearing remotely, and with certain professional witnesses (for example, police officers not giving core evidence) attending remotely by default. He framed this as a practical response to wasted hours in court and to transport constraints as remand populations rise.
He also highlighted AI-enabled transcription, and AI for interpretation, particularly where an interpreter is not available, alongside a move away from paper through digital jury bundles to reduce delay and reprinting.
D. The sceptical edge – AI literacy, governance and closed systems
The Q&A session made the governance theme more explicit. Asked whether HMCTS has the internal expertise to identify new ways of using technology, Leveson pointed to the failure of the HMCTS Common Platform as a supposed do-everything system and said it made him a “great fan of APIs” – linking systems so they can work together. He warned against over-commitment to tools that do not talk to each other or that contain hard-to-diagnose glitches, and he referenced the Post Office Horizon experience as a cautionary tale.
On AI literacy, Leveson treated capability as an operational risk. He supported the idea of a wider “crack AI task force” across the system. He also drew a clear line around tool design: he was not suggesting the courts should use AI “linked to the internet” for tasks such as preparing case summaries. Instead, he envisaged a closed system in which the AI receives the evidence and produces a summary quickly, with human checking and accountability. He cited other established use cases – including AI used in tribunals and tested applications in other sectors – as evidence that valuable automation is possible without loosening safeguards.
Conclusion
Leveson's emphasis on AI was pragmatic rather than speculative. He presented technology as a way of relieving operational pressure. But his most pointed observations were sceptical ones. Without coordinated governance, interoperable systems, and baseline AI literacy across agencies, the system risks repeating past technology failures at the moment it can least afford them. Leveson was clear about the role of AI: it is not a futuristic add-on but a core aspect of future court systems, designed to support, not supplant, legal decision-making.