The Promise and Perils of Using AI for Legal Research

Esquire Deposition Solutions, LLC
By now most litigators know that generative artificial intelligence is a two-edged sword. While the ethical duty of technology competence arguably requires litigators to consider using artificial intelligence technologies for client matters, that same ethical obligation demands extreme vigilance when using what are, in fact, emerging and untested practice tools.

To prove the point, we recently highlighted four cases in which litigators submitted briefs containing inaccurate, AI-generated case citations. Some lawyers escaped sanctions; others did not.

The Promise of Artificial Intelligence

Missteps notwithstanding, there’s great enthusiasm within the legal profession for using artificial intelligence on client matters. And for good reason. Artificial intelligence holds out the promise of delivering higher quality legal services more efficiently, with fewer, but more productive, lawyers and supporting staff. In the area of legal research alone, AI-supported tools potentially give lawyers the following advantages over traditional legal research methods:

  • Surface more relevant cases and secondary materials than traditional keyword searches
  • Summarize legal materials – including deposition transcripts – in a fraction of the time lawyers currently spend with manual review
  • Identify missing or weak citations in an opposing party’s pleadings and supporting briefs
  • Automatically check case cites, pointing out overruled decisions or disapproved analyses
  • Predict how courts may rule on legal arguments raised in dispositive motions
  • Automate repetitive legal research tasks such as tracking changes in relevant case law and monitoring new litigation
  • Use natural language processing technologies that “understand” user queries even when the user employs search terms that aren’t necessarily those used in relevant legal materials
  • Continuously improve, learning from user interactions and benefitting from rapid technological innovation

The case for artificial intelligence in legal research is compelling. The benefits to lawyers and clients can’t be overlooked.

The Perils of Artificial Intelligence

However, as the American Bar Association (along with some 40 state bar regulators) has advised, ethical litigators must consider both the “benefits and risks” associated with technology used for client matters. And the risks currently associated with generative artificial intelligence are numerous. Technology competence is one large risk, of course. But artificial intelligence in litigation raises a host of other ethical concerns: risks to client confidential information, the need to communicate the use of artificial intelligence to clients, and the duty of candor to the court when presenting legal arguments based on materials generated by AI tools, to name just a few.

Just last month, researchers at Stanford University’s Regulation, Evaluation, and Governance Lab released a study addressing the accuracy of several leading AI-supported legal research tools. The results were, in a word, sobering.

Among the problems they detected:

  • While AI legal research tools from commercial legal vendors outperformed general-purpose tools such as ChatGPT, the law-specific commercial offerings all contained “hallucinations” and, in the case of one prominent vendor, more than one-third of its responses contained a hallucination
  • AI legal research tools struggle with elementary legal comprehension, often misdescribing case holdings, failing to distinguish between the arguments of a litigant and the holding of the court, and failing to respect the hierarchy of legal authority
  • AI legal research tools often generate statements of law that do not exist
  • AI legal research tools often fail to find the most relevant sources
  • AI legal research tools often provide inapplicable legal authority
  • AI legal research tools often display “sycophancy,” described by the researchers as a tendency to agree with the user even when the user is mistaken
  • AI legal research tools tend to make elementary errors of reasoning and fact

The results returned by the legal research tools studied “seemed relevant and helpful,” the researchers noted, but they were not always correct factually or legally.

So do these researchers argue against using the new wave of artificial intelligence-supported legal research tools? No, far from it. But these tools should be used with great care, they say. “Even in their current form, these products can offer considerable value to legal researchers compared to traditional keyword search methods or general-purpose AI systems, particularly when used as the first step of legal research rather than the last word,” they conclude.

This advice largely aligns with the American Bar Association’s guidance in Formal Opinion 512 (Generative Artificial Intelligence Tools). In July 2024, the ABA’s Standing Committee on Ethics and Professional Responsibility wrote:

Because GAI tools are subject to mistakes, lawyers’ uncritical reliance on content created by a GAI tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation as required by Model Rule 1.1. While GAI tools may be able to significantly assist lawyers in serving clients, they cannot replace the judgment and experience necessary for lawyers to competently advise clients about their legal matters or to craft the legal documents or arguments required to carry out representations.

The Stanford researchers also called on legal research vendors to provide “hard evidence” for the reliability claims they make about their AI-supported tools; such evidence, they said, is currently lacking.

The article, “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” was released April 25, 2025; it will be officially published in the June 2025 issue of the Journal of Empirical Legal Studies.

Conclusion: Mind Both the Benefits and Risks

Clearly, artificial intelligence for legal research is both a promising and inevitable feature of modern litigation. Just as clearly, as we’ve seen from scholarly research and actual experience in the courts, artificial intelligence tools should be deployed with great care and a full appreciation of their current shortcomings. As Sergeant Phil Esterhaus was fond of saying in television’s police drama Hill Street Blues, “Let’s be careful out there.”
