Judges Issue Standing Orders Regarding the Use of Artificial Intelligence

McDonnell Boehnen Hulbert & Berghoff LLP
The impact of generative artificial intelligence (AI) is unsurprisingly significant in the field of education, with some teachers and professors responding by instituting oral examinations or handwritten essays, or by requiring that first drafts of written material be composed only on "locked down" computers with no access to AI tools.  But as the education system (as just one example) is wrestling with the implications of these tools, so is the legal community.

In a recent case that has rocketed into infamy, two lawyers filed a brief in the Southern District of New York that had been written at least in part by the large language model (LLM) ChatGPT.[1]  After opposing counsel and the judge determined that the brief cited case law that did not exist, and that the quotes from these fictitious cases were fabrications by ChatGPT, the court imposed sanctions under Rule 11 for purposes of deterrence.  The lawyers were ordered to pay a $5,000 penalty.  Their infraction, which was described in detail by the court, was not the mere use of generative AI, but failing to properly cite-check and otherwise vet a brief in a judicial proceeding.

Perhaps in response to this case, we have seen a number of judges issue standing orders on how AI can and cannot be used in proceedings before them.

Eastern District of Pennsylvania Judge Michael M. Baylson published an order on June 6 which states:

If any attorney for a party, or a pro se party, has used Artificial Intelligence ("AI") in the preparation of any complaint, answer, motion, brief, or other paper, filed with the Court, and assigned to Judge Michael M. Baylson, MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing, and CERTIFY, that each and every citation to the law or the record in the paper, has been verified as accurate.[2]

While Judge Baylson is engaging in an earnest attempt to avoid a mess like the one in New York, his order is overly broad.  Using AI tools such as ChatGPT, Bard, and the like is currently an intentional act on the part of the user.  In the near future, however, as these tools are integrated into legal search and word processing software, lawyers may not know -- and have no reasonable way of finding out -- whether AI has been used at any point during preparation.  For example, are the case summaries provided by your favorite search engine the result of human effort, AI, or both?  Likewise, is the grammar suggestion provided by your word processor the output of AI or a rules-based algorithm?

When considering these issues, it is important to keep in mind the differences between traditional AI and generative AI.  Traditional AI is trained to address specific fields or problems and typically is a form of classifier.  Examples include spam filtering, image classification, speech recognition, and recommendation systems.  Generative AI, on the other hand, is capable of creating new content that is open ended and often not limited to any particular field.  Current generative AI tools include ChatGPT and Bard, but also image generation tools (Dall-E, Stable Diffusion, and Midjourney), as well as music composition tools (no suggestions here as I've yet to find one that allows the non-musician to generate high quality music of a variety of styles from a simple prompt).

In short, traditional AI and generative AI are different animals.  Traditional AI is everywhere already but useful only in limited ways, whereas we are collectively kicking the tires of generative AI but its eventual footprint is likely to be enormous.

In not differentiating between traditional and generative AI, Judge Baylson's order -- if read strictly -- puts a significant burden on lawyers appearing in his court, especially those without a technical background.  Luckily, two other judges have issued orders that are more focused.

U.S. Court of International Trade Judge Stephen Alexander Vaden is concerned with the risk of disclosing confidential information to the entities operating generative AI tools.  His order reads:

Generative artificial intelligence programs that supply natural language answers to user prompts, such as ChatGPT or Google Bard, create novel risks to the security of confidential information.  Users having "conversations" with these programs may include confidential information in their prompts, which in turn may result in the corporate owner of the program retaining access to the confidential information.  Although the owners of generative artificial intelligence programs may make representations that they do not retain information supplied by users, their programs "learn" from every user conversation and cannot distinguish which conversations may contain confidential information . . .

Because generative artificial intelligence programs challenge the Court's ability to protect confidential and business proprietary information from access by unauthorized parties, it is hereby:

ORDERED that any submission in a case assigned to Judge Vaden that contains text drafted with the assistance of a generative artificial intelligence program on the basis of natural language prompts, including but not limited to ChatGPT and Google Bard, must be accompanied by:

(1) A disclosure notice that identifies the program used and the specific portions of text that have been so drafted;

(2) A certification that the use of such program has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.[3]

These two requirements are simple -- lawyers can use LLMs to assist with submissions, but they must notify the court that they did so and attest that they have not disclosed a party's confidential information to such tools.  This will incentivize lawyers to think twice before they submit a ChatGPT prompt such as "Write a legal argument that [trade secret] was improperly obtained by John Smith based on [factual allegations]."

Finally, Judge Arun Subramanian of the Southern District of New York has issued a simple yet balanced and effective order:

Use of ChatGPT and Other Tools.  Counsel is responsible for providing the Court with complete and accurate representations of the record, the procedural history of the case, and any cited legal authorities.  Use of ChatGPT or other such tools is not prohibited, but counsel must at all times personally confirm for themselves the accuracy of any research conducted by these means.  At all times, counsel—and specifically designated Lead Trial Counsel—bears responsibility for any filings made by the party that counsel represents.[4]

In a minimally restrictive fashion, Judge Subramanian reminds lawyers that they are ultimately responsible for the veracity and accuracy of their filings.  This is not unlike reminding senior lawyers that they need to review and check the work of their junior associates.

To be certain, these are not the only standing orders on generative AI that we will see.  Within a few months it may be rare for any judge not to have such an order in place.  Eventually, the gist of such orders will likely be synthesized into a standard of practice adopted by the vast majority of the judiciary.

Of course, this raises the question of whether such a standard of practice will also place disclosure, confidentiality, and veracity requirements on judges' own use of generative AI.

[1] https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.54.0_2.pdf.

[2] https://www.paed.uscourts.gov/documents/standord/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf.

[3] https://www.cit.uscourts.gov/sites/cit/files/Order%20on%20Artificial%20Intelligence.pdf.

[4] https://www.nysd.uscourts.gov/sites/default/files/practice_documents/AS%20Subramanian%20Civil%20Individual%20Practices.pdf.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© McDonnell Boehnen Hulbert & Berghoff LLP | Attorney Advertising