A Quick Lesson on Harnessing Artificial Intelligence


A recent case highlights another area of potential legal risk in using generative AI: liability for content when the AI simply gets the facts wrong.

Earlier this month, a radio host sued OpenAI in Georgia state court, claiming OpenAI’s generative AI program – ChatGPT – defamed him. According to the plaintiff, when a third-party journalist prompted ChatGPT to summarize a separate complaint filed in a federal district court in Washington, ChatGPT allegedly fabricated an entirely new lawsuit naming the radio host as a defendant. The radio host alleges that this non-existent lawsuit accused him of fraud and embezzlement, and that ChatGPT defamed him when it “published” these false summaries to the journalist. The journalist, for his part, did not republish the ChatGPT summaries. Instead, according to the radio host’s allegations, the journalist contacted one of the plaintiffs in the original lawsuit pending in Washington, who confirmed that ChatGPT’s summaries were false.

Putting aside the odd factual circumstances of this case (which raise the question of whether the radio host’s reputation was meaningfully damaged by an alleged publication to a single journalist who did not trust the summary enough to publish it), this alleged “hallucination” by ChatGPT of a false factual narrative is a cautionary tale concerning the legal hazards of uncritical reliance on generative AI outputs.

First, had the journalist published the summary generated by ChatGPT, the journalist (and the publication running the piece) would have faced serious legal risk for republishing the false narrative. This is because, under the law, generally speaking, “tale bearers are as bad as tale makers.” In other words, “ChatGPT told me this content is true, so I can publish it” is unlikely to accomplish much as a defense if relying on ChatGPT was unreasonable under the circumstances.

Second, this lawsuit underscores why it is important for companies to develop comprehensive internal policies governing the use (or prohibiting the use) of generative AI in their businesses. Generative AI technology is incredibly powerful, but for its benefits to outweigh its risks, companies should exercise robust oversight of their employees’ use of the technology. While the OpenAI case involves a defamation claim, the problem of AI producing false narratives has implications in many other scenarios. For example, lawyers in New York were recently sanctioned after they included made-up case citations generated by ChatGPT in a legal brief. Likewise, without proper oversight, it is not hard to imagine facing false advertising, unfair competition, or tortious interference claims (or even regulatory investigations) if a press release, advertisement, or other marketing collateral contains false statements about competitors or your own products. Again, a defense of “but AI created the content” is unlikely to find favor in the courts.

Third, the concern with business use of AI is not limited to content created by generative AI: the data that employees feed into generative AI programs to produce those outputs in the first place raise data privacy and regulatory issues of equal concern. Entering that data carries its own risks for the proper handling of customer data, third-party intellectual property, and other types of information, all of which deserve a second look before they are used to create AI-generated outputs.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BCLP | Attorney Advertising
