The Siren’s Song of Generative AI in Pleadings

Esquire Deposition Solutions, LLC

Most uses of artificial intelligence in litigation carry great promise but little risk. That’s not the case with generative AI tools employed to draft legal pleadings. Despite the best efforts of courts and bar groups to promote careful, ethical behavior within the legal community, generative AI tools are continuing to produce sanctionable missteps in litigation. This article highlights a number of very recent cases where litigators have been caught with AI-generated “hallucinations” in their court filings.

Clearly, the use cases for artificial intelligence in litigation are compelling. Artificial intelligence technologies are near-mandatory tools for uncovering relevant data buried in electronically stored information shared during pretrial discovery. Artificial intelligence technologies assist in deposition preparation, instantly translate voice recordings into text, analyze deposition testimony, and uncover connections between deposition testimony and other information in the case file.

Artificial intelligence tools can analyze contracts, offer predictions of litigation outcomes, hone litigation strategies, help select jurors, and automate routine law office processes. According to survey data in the 2024 Thomson Reuters Future of Professionals Report, artificial intelligence tools can already save four hours of time per lawyer, per week. These efficiencies will only increase over time. Not only does artificial intelligence promise cost savings to law firms; it also frees up lawyer time for the high-level, strategic counseling demanded by sophisticated clients.

However, the risk-reward calculus changes when generative artificial intelligence tools – the technologies that “research” the law and “write” legal pleadings – are involved. Unlike e-discovery and litigation analysis tools, which are deployed mostly behind the scenes, the outputs of generative artificial intelligence technologies are subject to close scrutiny by opposing counsel and by courts.

Under Rule 11 of the Federal Rules of Civil Procedure, an attorney’s signature on, or mere filing of, a pleading with the court “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances . . . the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” Violations of Rule 11 carry the risk of sanctions for lawyers and their firms.

Ethical obligations are also triggered when pleadings built with generative artificial intelligence are filed with courts. Chief among these are the ethical duties of competence and candor with the court.

Here in the United States, the procedural and ethical pitfalls of generative artificial intelligence tools have been known ever since the well-publicized court ruling in Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y., June 22, 2023), an early case imposing sanctions for the careless use of ChatGPT.

Two years later, not everyone has gotten the message.

Despite the efforts of the American Bar Association and many state bar associations to encourage the competent and careful use of generative artificial intelligence by litigators, the misuse of these technologies continues. Dozens of court rulings have also attempted to chart the legal dangers posed by careless use of generative artificial intelligence technologies.

Four Notable Rulings in Recent Months

What follows is a sampling from the numerous recent court rulings involving missteps in the use of generative artificial intelligence in litigation. Lessons for lawyers can be gleaned from each of these cases.

In Bevins v. Colgate-Palmolive Co., No. 25-576 (E.D. Pa., April 10, 2025), an attorney was found to have violated Rule 11 by “submitting briefs … that cited case law that did not support his stated propositions and which, on their face, do not exist.” The court was unconvinced by the attorney’s explanations for the errors. Additionally, the briefs violated the court’s standing order requiring attorneys to certify that any research performed with the assistance of AI has been verified.

For sanctions, the court entered an order (a) striking the attorney’s appearance in the case, (b) informing local state and federal bar regulators of its ruling, and (c) directing the attorney to inform his client of the sanctions order.

The case of Wadsworth v. Walmart Inc., No. 23-cv-118 (D. Wyo., Feb. 24, 2025), is notable for three reasons. First, the hallucinated case citations were generated by AI trained on the law firm’s own database of case materials. Some legal commentators have suggested that training AI tools on domain-specific legal data (as opposed to the public Internet) will promote reliable AI outputs. That didn’t happen in this case.

Second, the law firm – after being informed that its pleading contained citations to non-existent cases – took significant and immediate remedial measures, including:

  • withdrawing motions supported by the AI hallucinations,
  • “being honest and forthcoming about the use of AI,”
  • paying opposing counsels’ fees for defending the motions, and
  • implementing policies, safeguards, and training to prevent another occurrence in the future (and providing proof).

These measures drew praise from the Wadsworth court. In fact, the court remarked, “attorneys should – at the very least – follow these steps to remediate the situation prior to the issuance of any sanction.”

Third, the case illustrates a common scenario: Attorneys from a large, national law firm were working with local counsel, who actually drafted the pleadings at issue. The national firm’s lawyers did not review the pleadings, instead relying on local counsel’s reputation. The court noted that, had the national firm’s lawyers reviewed the pleadings, they might have spotted the hallucinations. In any event, it said, these attorneys violated Rule 11 because what happened here — i.e., no inquiry whatsoever — cannot be deemed objectively reasonable. The court stated:

Every attorney learned in their first-year contracts class that the failure to read a contract does not escape a signor of their contractual obligations. … Similarly, one who signs a motion or filing and fails to reasonably inspect the law cited therein violates Rule 11 by its express terms.

Rule 11 requires attorneys to make a reasonable inquiry into the law before signing (or giving another permission to sign) a document. “If an attorney does not do so, then they should not sign the document,” the court said. “However, if the attorney decides to risk not making reasonable inquiry into the existing law and signs, then they may be subject to sanctions.”

For sanctions, the court imposed a monetary fine on the attorney who drafted the pleadings and revoked his pro hac vice admission to the local bar. The national law firm attorneys who had a supervisory role in the case received fines alone.

Mid Cent. Operating Eng’rs Health v. Hoosiervac LLC, No. 24-cv-00326 (S.D. Ind., Feb. 21, 2025), involved three separate pleadings supported by fictitious, hallucinated legal citations. Asked to show cause why he should not be sanctioned, the attorney responsible for the pleadings asserted that he did not attempt to verify the citations because, according to the court, “they appeared to be credible.” He argued that his errors were not committed in bad faith.

The court was unsympathetic. “It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view,” the court remarked. “It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity — or, indeed, the very existence — of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney.”

Turning to the Indiana Rules of Professional Conduct, the court held that the attorney violated his ethical duties of competence (Rule 1.1), submission of meritorious claims and contentions (Rule 3.1), and candor toward the tribunal (Rule 3.3). For sanctions, the court imposed a $15,000 fine, referred the case to state bar regulators, and ordered that a copy of its sanctions order be given to the attorney’s client.

Finally, the ruling in Benjamin v. Costco Wholesale Corp., No. 24-cv-7399 (E.D.N.Y., April 24, 2025), issued just last week, is remarkable both for the impatience the magistrate judge expressed with the entire business of AI-hallucinated errors and for the lenience he extended to the attorney who confessed, apologized, and promised to do better. In a brief filed with the court, the attorney claimed legal support from seven different case precedents, five of which either did not exist or contained a citation to a different, irrelevant ruling.

The magistrate judge complained that the attorney’s behavior was, unfortunately, “nothing new.” Courts across the country are having to cope with attorney submissions “littered with AI-generated ‘case’ citations,” the court said.

From the magistrate judge’s ruling:

These phony submissions raise serious problems. To start with the obvious, an attorney who submits fake cases clearly has not read those nonexistent cases, which is a violation of Rule 11 of the Federal Rules of Civil Procedure. As detailed below, these made-up cases create unnecessary work for courts and opposing attorneys alike. And perhaps most critically, they demonstrate a failure to provide competent representation to the client.

The attorney represented to the court that the offending pleading was drafted just hours before the filing deadline, with a generative AI tool she had never used before. The AI tool’s “phony” output was lightly customized to fit the facts of the case, and the attorney’s affirmation was pasted into the conclusion of the document. According to the court, the attorney did not make any effort to read the cases in the pleading or to cite check them. Moreover, the arguments in the pleading made little sense and were not responsive to the claims asserted by the opposing party. The court called the attorney’s conduct “grossly negligent,” incompetent, and in bad faith.

Nevertheless, the attorney escaped the harshest of sanctions. The court imposed a $1,000 fine and directed the attorney to inform her client of the court’s order. The case was not referred to local bar regulators, nor was her pro hac vice admission reviewed. The court said its lenience was motivated by the attorney’s candor, sincere regret, her status as a first-time offender, and her willingness to enroll in continuing legal education classes on artificial intelligence-related topics.

Lessons Learned

What can be learned from these rulings? In a nutshell, this:

  1. Courts will show litigators little or no patience when confronted with pleadings supported by AI-hallucinated case citations.
  2. No pleading should be filed with the court unless the lawyer has actually read the cited cases and verified their accuracy.
  3. Litigators should follow any local standing orders governing the use of generative artificial intelligence in pleadings.
  4. The duty imposed by Rule 11 to certify the accuracy of all matters asserted in a pleading cannot be delegated to another attorney (or vendor).
  5. Contrition is preferable to obfuscation when AI hallucinations are discovered in a pleading.
  6. Continuing legal education programs and law firm policies on responsible use of generative AI may encourage leniency from courts when AI hallucinations are discovered in a pleading.

If you ask Google Gemini for the definition of “siren’s song,” it will reply that the phrase refers to something that seems tempting but ultimately leads to downfall or harm. In modern usage, “siren’s song” may refer to “the temptation of an easy solution or shortcut that may lead to failure.” So it is with generative artificial intelligence: The ability to generate superficially plausible work in seconds is alluring, but careless AI users are likely to be caught out and increasingly likely to suffer adverse consequences in case outcomes and professional reputations.

As Google itself notes, “Generative AI is experimental.” We have been warned.

Written by:

Esquire Deposition Solutions, LLC