Law firms are spending millions on AI tools that cannot answer basic questions about their own cases. The problem is not the AI. The problem is that nobody taught the AI how law firms actually work.
Every legal tech vendor is selling AI-powered something. AI research. AI document review. AI contract analysis. AI case prediction. Firms are buying these tools, running pilots, and getting mediocre results. Then they blame the technology or conclude that AI is overhyped.
They are diagnosing the wrong problem. The issue is not which AI model to buy. The issue is that legal work happens in fragmented silos that no AI can penetrate. Without solving that problem first, every AI tool you buy will underperform.
This is a story about context engineering. And if you have never heard that term before, you need to pay attention. It is the difference between AI that transforms your practice and AI that becomes shelfware.
What Context Engineering Actually Means
Context engineering is the discipline of organizing information so AI systems can understand relationships, dependencies, and meaning across an entire domain. It is not about the AI model itself. It is about the architecture that feeds the model.
Think about how a senior partner approaches a complex case. They do not just read one document. They understand how that document relates to the procedural posture, the applicable law, the client's business objectives, the judge's track record, and opposing counsel's likely strategy. They see the whole picture because they have spent years building a mental model of how these pieces connect.
AI needs the same kind of structured understanding. But here is what most firms give their AI instead: a document management system that stores files in folders, an email system with no connection to case data, a practice management system that tracks tasks but not context, and a research platform that knows nothing about the specific matter. Then firms wonder why their AI cannot provide useful analysis.
A lawyer asks the AI: "What is our best argument on the motion to dismiss?" The AI cannot answer that question well because it cannot see the complaint, the relevant case law for this jurisdiction, the judge's prior rulings on similar motions, or the client's priorities. It can only work with whatever fragment of information exists in the single system where it is deployed.
This is like asking a first-year associate to draft a brief but only giving them access to one exhibit. Then being disappointed when the brief is terrible.
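To make the contrast concrete, here is a minimal sketch of what "giving the AI context" means in practice: pull the fragments that today live in separate systems into one structure, and assemble them into a single prompt. All record names and fields here are invented for illustration; no vendor's actual API is implied.

```python
from dataclasses import dataclass, field

# Hypothetical record of what the siloed systems each hold about one matter.
@dataclass
class MatterContext:
    complaint_summary: str = ""
    jurisdiction_cases: list = field(default_factory=list)
    judge_rulings: list = field(default_factory=list)
    client_priorities: list = field(default_factory=list)

def build_prompt(question: str, ctx: MatterContext) -> str:
    """Assemble one prompt from every source, instead of whatever
    single silo the AI tool happens to live in."""
    sections = [
        f"Question: {question}",
        f"Complaint: {ctx.complaint_summary}",
        "Controlling authority: " + "; ".join(ctx.jurisdiction_cases),
        "Judge's prior rulings: " + "; ".join(ctx.judge_rulings),
        "Client priorities: " + "; ".join(ctx.client_priorities),
    ]
    return "\n".join(sections)

ctx = MatterContext(
    complaint_summary="Breach of contract; fraud count likely preempted.",
    jurisdiction_cases=["Smith v. Jones (9th Cir. 2021)"],
    judge_rulings=["Granted 12(b)(6) on similar fraud claims twice"],
    client_priorities=["Avoid discovery into pricing data"],
)
prompt = build_prompt(
    "What is our best argument on the motion to dismiss?", ctx
)
```

The model is the same either way. The difference is whether the prompt it receives carries one exhibit or the whole matter.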
Why Legal AI Keeps Failing (And Will Keep Failing)
The legal industry has a dirty secret about AI adoption. It is not working. MIT researchers found that 95 percent of companies running AI pilots saw little or no return on investment. Financial analysts project an $800 billion revenue shortfall for AI companies by 2030.
Law firms are experiencing this firsthand. They pilot an AI research tool that still requires lawyers to manually verify everything. They try AI document review that misses critical issues because it lacks context about the case strategy. They implement AI contract analysis that cannot answer questions about how this contract relates to the client's other agreements.
The response is predictable: firms blame the vendor, switch to a different AI tool, and get similar results. Then they conclude AI is not ready for legal work.
Wrong diagnosis. The AI is fine. The infrastructure is broken.
Here is what is actually happening. Law firms built their technology stacks like a teenager's bedroom. They tossed in a new tool whenever they needed something. One system for case management. Another for document storage. A third for time tracking. A fourth for client communication. A fifth for research. A sixth for billing. Maybe a seventh for conflicts checking.
These systems do not talk to each other. Data gets manually copied between platforms. Information lives in silos. Lawyers spend hours reconstructing context that should be immediately available.
Now firms are trying to add AI to this mess. They are shocked when it does not work miracles.
What Actually Works: The AlphaFold Lesson
This brilliant NYT article (gifted) by NYU Professor Gary Marcus is a must-read for anyone trying to figure out the future of legal technology. Professor Marcus discusses two important examples of the difference between specialized AI and general chatbots: AlphaFold and Waymo.
While legal tech vendors were building AI chatbots, Google DeepMind was revolutionizing biology. Their AlphaFold system can predict protein structures with remarkable accuracy. It has analyzed over 200 million proteins. Scientists use it routinely for drug development. Its creators won a Nobel Prize.
AlphaFold is not a general-purpose AI that learned biology from scratch. It is a specialized system built with deep knowledge of how proteins work. The developers engineered the architecture around amino acid sequences, folding patterns, and molecular relationships. They gave the AI context before asking it to learn.
The result: AlphaFold solves one problem extremely well because it was designed specifically for that problem with proper context built in.
Compare that to legal AI tools. Most are general-purpose language models with some legal content thrown at them. They were not architected around how legal matters actually work. They do not understand the relationship between a complaint and a motion, between a contract clause and a business risk, between a case citation and a litigation strategy.
Marcus notes that the same pattern appears in autonomous vehicles. Waymo built specialized systems with components for object detection, sensor integration, and decision-making. The AI understands driving because the system was engineered to provide driving context. Ghost Autonomy tried to use general-purpose AI for self-driving cars, raised over $200 million, and failed.
The lesson is clear: context-specific architecture matters more than the underlying AI model.
The Single Pane of Glass Is Not Marketing
Legal tech companies keep talking about "single pane of glass" solutions. This sounds like vendor marketing. It is not. It is the only way to make AI actually useful for legal work.
Ryan Anderson at Filevine argued this at Lex Summit. Clio made the same case at their conference. These are not small companies with nothing to lose. They are market leaders who understand that AI without context is just expensive autocomplete.
Here is why the single pane of glass matters. When all case information lives in one connected system, the AI can see relationships. It knows that this email relates to that motion, which connects to these contract terms, which implicate that case law, which matters because of the judge's track record.
A lawyer asks: "What are the risks if we take this position on summary judgment?" An AI with proper context can analyze the legal arguments, review the procedural history, consider the judge's tendencies, evaluate the strength of the evidence, and assess how this decision impacts the overall case strategy.
An AI without context can only summarize the legal standard for summary judgment, which any first-year law student could do.
The difference is not the AI model. Both might use Claude or GPT-5 or Gemini. The difference is the architecture. One AI has context. The other is flying blind.
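The relationship-following described above can be sketched in a few lines. This is a toy model of a connected store, with invented record IDs: records are linked, so a question about one record can automatically pull in its neighbors, something a siloed system cannot do at all.

```python
# Hypothetical links between records in one connected system.
links = {
    "email:settlement-offer": ["motion:summary-judgment"],
    "motion:summary-judgment": ["contract:msa-clause-7",
                                "ruling:judge-prior-msj"],
    "contract:msa-clause-7": ["case:smith-v-jones"],
    "ruling:judge-prior-msj": [],
    "case:smith-v-jones": [],
}

def related_context(record_id: str, depth: int = 2) -> set:
    """Collect every record reachable within `depth` hops — the
    context a connected system can hand to the model."""
    seen: set = set()
    frontier = {record_id}
    for _ in range(depth):
        frontier = {n for r in frontier
                    for n in links.get(r, [])} - seen
        seen |= frontier
    return seen

# Asking about the email pulls in the motion it relates to,
# plus the contract clause and prior ruling the motion connects to.
ctx = related_context("email:settlement-offer")
```

In a real platform the "links" come from the data architecture itself: matters, documents, dockets, and communications stored as connected records rather than files in separate systems. The traversal is trivial; building the connected store is the work.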
What This Means for Your Firm (Right Now)
Most firms are approaching AI exactly backwards. They are asking: "Which AI vendor should we choose?" The better question is: "How do we structure our information so any AI can be useful?"
This requires uncomfortable decisions. It means consolidating systems instead of adding more point solutions. It means standardizing processes instead of letting every practice group do their own thing. It means investing in data architecture instead of flashy AI features.
Most firms will not do this work. They will keep buying disconnected AI tools and wondering why their expensive investments never deliver results. They will run more pilots. Test more vendors. Attend more conferences. And continue getting mediocre outcomes.
A small number of firms will take the harder path. They will treat AI as infrastructure, not software. They will build systems where information flows between functions without manual intervention. They will create platforms where an AI can see the full context of a legal matter.
Those firms will have an insurmountable advantage. Not because they picked the right AI model, but because they built the architecture that makes any AI model effective.
Stop Buying AI. Start Building Context.
The legal industry is obsessed with artificial general intelligence and whether ChatGPT will replace lawyers. This is the wrong conversation.
The real question is whether your firm will build the infrastructure to make AI useful. That infrastructure is context engineering. It is not sexy. It does not make for good conference demos. It requires actual work instead of just signing vendor contracts.
But it is the only thing that will make your AI investments pay off.
You can keep buying point solutions with AI features bolted on. You can keep running pilots that show modest improvements. You can keep waiting for the AI to get better.
Or you can recognize that the AI is already good enough. What is missing is the foundational architecture to make it effective for legal work. Law firms that implement AI within a true single pane of glass will have a massive advantage.
Every firm talks about digital transformation. Most mean "buying new software." The firms that will win are the ones who understand that transformation means rebuilding how information flows through the organization.
Context engineering is not a feature roadmap. It is a strategic decision about whether your firm will be structured for AI or against it. The technology exists. The question is whether you will do the work to make it valuable.
Your competitors are making the same choice right now. Some will build the infrastructure. Most will not. In five years, the gap between those two groups will be unbridgeable.
Which side will you be on?