[co-author: Melody Stephen]
The Westminster Legal Policy Forum conference on 26 February 2026 brought together speakers from across the justice system – spanning policy, judicial, practitioner and user perspectives – and covered a wide range of practical ideas.
Across the day, the dominant theme was urgency. The justice system is dealing with backlogs, rising complexity and an evidential environment shaped by digital data at scale. Against that backdrop, AI was framed less as an innovation project and more as a set of tools that will increasingly sit inside the operating model of courts and justice agencies.
That came through in contributions ranging from Sir Brian Leveson's keynote to practitioner and user perspectives from speakers such as Tana Adkin KC (former Chair of the Criminal Bar Association), Jim Taylor (Chair of the Policy and Reform Committee, Civil Court Users Association) and Mike Mack (Panel Member of the Legal Services Consumer Panel), alongside vendor perspectives shared by Kirsten Maslen (Senior Director, Growth and Commercial Strategy, Thomson Reuters).
For corporates – as frequent litigants, defendants, witnesses and active users of the civil justice system – these developments matter in a practical way. Efficiency reforms can speed up the justice system and change what its users expect from it. Access-to-justice measures can change claim volumes and user behaviour. And AI-related integrity risks can create new disputes about evidence, disclosure and costs.
Context and background
Sir Brian Leveson’s remarks set the tone. As the first speech of the day, it also functioned as a companion piece to his Independent Review of the Criminal Courts. He described a system under acute strain, pointing to a Crown Court backlog of nearly 80,000 cases and the broader drivers of delay: under-resourcing, post-COVID disruption, and the explosion of digital material. His prescription was not “AI everywhere”, but a programme of reform in which technology is integral from start to finish – from policing and file preparation through to listing, case progression and courtroom administration – with clear principles of fairness, transparency, accountability and human oversight.
That “whole system” framing resurfaced throughout the day. Speakers stressed that reforms stumble at predictable points: incomplete transitions to a genuinely paperless world; a lack of interoperability between systems; and gaps in accountability when new tools generate new workloads without clear ownership and escalation routes.
Analysis
- AI as an efficiency lever – but only as augmentation
Leveson’s proposals were framed around using technology to support the system rather than substituting human judgment: for example, automating elements of redaction and triage, improving search across unused material, and using digital listing and progression tools to reduce delay and administrative drag. Remote participation was presented as a capacity lever where appropriate, while preserving in-person trials as the default.
Leveson repeatedly returned to a simple principle: AI should augment rather than replace human decision-making. That distinction matters in corporate disputes too. Similar themes ran through contributions from the likes of Kirsten Maslen, who stressed that AI should be used for decision support rather than decision-making, with verification built into the workflow. As AI tools become more common in preparation and case management, judges and parties are likely to want clearer explanations of where AI has been used, what it did, and what checks were applied – particularly where outputs could influence conclusions (summaries, chronologies, issue tagging, or translated material). Whether that level of transparency is always practical is a different question, and one that is currently being worked through in debates and working groups across the justice system.
- Interoperability and joined-up design as a governance issue
A recurring frustration – raised in Leveson’s remarks and picked up by speakers such as Jim Taylor and Dia Thanki (Founder and CEO, Alchemy Machines) – was fragmentation: different agencies and court services building bespoke systems that force manual transfer of data and introduce failure points. Several speakers argued that interoperability is not a technical preference but an operational necessity – and that the Common Platform experience shows the cost of over-promising without integration discipline.
If reform succeeds, large organisations may find the system easier to interact with at scale (including via more structured digital interfaces). If it fails, the same friction will persist – but with new tech layers adding complexity.
- Access to justice and inclusion
Speakers from the policy and consumer perspective – including Jelena Lentzos and Mike Mack, and contributions from the likes of Dr Sarah Stephens (Associate Professor, Legal Innovation, University of Sussex) and Professor Sue Prince (Professor Emeritus, Law School, University of Exeter) – emphasised that digital justice is no longer experimental; it is being built into how cases are started and managed. The point was not simply that more services move online, but that system design determines who can participate and on what terms.
That has a real implication for corporates: if navigation tools, portals and AI-assisted services help individuals and small businesses recognise issues, draft documents, and pursue money claims more cheaply, more disputes will reach formal channels. Further contributions – including from speakers such as Jim Taylor, Mike Mack and Jelena Lentzos – hinted at a “volume effect” already emerging: easier production of applications and complaints, and litigants arriving with arguments drafted by AI. Even where many claims remain low value, they can create material workload and cost when scaled.
At the same time, multiple speakers warned against treating access as purely a measure of speed and case clearance. Digital exclusion remains significant, and vulnerable users may struggle most with unregulated tools that provide confident but wrong answers. That tension – expanding access while avoiding a two-tier system – is likely to shape future rulemaking and standards.
- Trust, evidence integrity and the new satellite disputes
Public confidence was the conference’s centre of gravity – from Tana Adkin KC’s practitioner-focused caution to courtroom insights from Alan Parfery (Advocate Depute, Crown Office and Procurator Fiscal Service), alongside civil practice perspectives from Sophie Mitchell (Barrister, St Pauls Chambers). Concerns focused on predictable fault lines: hallucinated legal authorities, hidden prompt manipulation, confidentiality breaches through public tools, and the rising plausibility of synthetic evidence (including deepfakes).
For corporate litigants, two things follow. First, the authenticity of digital material will become a more routine contested issue – not only in fraud-heavy cases, but anywhere video, audio, messaging or screenshots matter. Secondly, AI may increase the amount of checking work required: verifying authorities, validating translations and summaries, and scrutinising opponent materials that appear AI-generated. The risk is not just bad conduct; it is time and cost migrating from substantive issues into integrity disputes.
Practical takeaways for corporates
- Expect an increase in claims. As portals and legal-tech tools mature, organisations should anticipate higher volumes of claims, complaints and information requests that are easier to generate and more standardised in form.
- Treat evidence governance as crucial. Maintaining a defensible audit trail for key evidence – where it came from, who handled it, and how versions were controlled – will matter more when authenticity is questioned.
- Be ready for AI transparency expectations. Even without changes to the rules, parties will press for disclosure of AI use where it could affect reliability (summaries, translations, document review workflows).
- Assume verification remains non-delegable. AI can assist, but responsibility stays with the lawyer and client teams signing off the output.
- Engage with the design of digital systems. As the courts move towards systems that help to run cases online from start to finish, the practical impact will depend on how well those systems work.
Conclusion
The conference was not a sales pitch for AI. It was an argument that justice reform now depends on disciplined, governed deployment of technology – with inclusion and trust treated as core requirements rather than afterthoughts. For corporates, this shift changes the environment in which disputes arise and are processed. A more navigable justice system may broaden participation and increase claim volumes, but a more AI-enabled system will intensify scrutiny of evidence integrity and verification practices. The organisations that cope best will be those that treat these developments as part of litigation readiness and operational resilience, not just as “court modernisation” happening somewhere else.