AI And The FLSA: (Maybe) Never The Twain Shall Meet!

There has been a great deal of controversy about the use of Artificial Intelligence (AI) in all facets of life, and now in the legal field. My own firm has certain very restrictive rules about lawyers using AI, especially when submitting a brief to a court. Not every litigant or entity takes the same cautious approach, and that carelessness can easily land someone in significant legal peril. The lesson was highlighted in a recent decision in which the defendant (who filed the appeal pro se) was ordered to pay more than $300,000 in wages, damages, and attorney fees to an employee and was sanctioned for using fictitious cases in a brief that AI had generated. The case is entitled Kruse v. Karlen et al. and issued from the Missouri Court of Appeals.

The Court came down very hard on the submission of this brief, asserting that “due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by artificial intelligence, we dismiss the appeal.”  The defendant was ordered to pay an additional $10,000 to the plaintiff “in damages for filing a frivolous appeal.”

The plaintiff charged that the Company had refused to pay her the wages she was owed. A lower court Judge ruled that the Company owed her $72,936.42 in wages, $145,872.84 in damages, and more than $90,000 in attorney fees. The defendant, Karlen, filed an appeal and acted pro se. He committed numerous errors during the appeal process, but the kicker was in the brief he filed. As the Court observed, “particularly concerning to this court is that appellant submitted an appellate brief in which the overwhelming majority of the citations are not only inaccurate but entirely fictitious. Only two out of the twenty-four case citations in appellant’s brief are genuine.”

The Court went on to note that the brief “offers citations that have potentially real case names — presumably the product of algorithmic serendipity — but do not stand for the propositions asserted.” The Court also refused to accept Karlen’s “apology” in his reply brief for submitting the false cases. The Court wrote that “appellant stated he did not know that the individual would use ‘artificial intelligence hallucinations’ and denied any intention to mislead the court or waste respondent’s time researching fictitious precedent. Appellant’s apology notwithstanding, the deed had been done, and this court must wrestle with the results.”

In sum, the “bogus citations” were “a flagrant violation of the duties of candor appellant owes to this court,” the panel said. The Court specifically referenced Mata v. Avianca, a New York case from some months earlier in which a Judge chastised lawyers for submitting an AI-generated brief with fictitious citations. The Court would not “…permit fraud on this court in violation of our rules.”

The Takeaway

This case and the New York one a few months earlier are a stark warning to all lawyers who contemplate using AI. Any AI output that includes citations must be regarded as suspect, requiring (or so one would think) immediate verification of the case citations themselves and of the propositions those cases are ostensibly cited for.

Or learn the hard way…


Written by:

Fox Rothschild LLP
