Will Generative AI Create a Break in the Impenetrable Wall That Is Section 230?

Pillsbury Winthrop Shaw Pittman LLP

[co-author: Jaria Martin]

TAKEAWAYS

  • Notwithstanding the robust protection it provides in many other contexts, Section 230 may not protect online platforms developing generative AI systems from legal liability arising from false information produced by their products.
  • Neither the courts nor Congress has yet provided reliable legal guidance on generative AI in the United States, but a Georgia state court defamation case and newly introduced legislation in the Senate present opportunities to change that.
  • Developers of generative AI systems should proceed with caution when creating new products as this area of law develops in real time.

Overview

As people increasingly experiment with ChatGPT, Google Bard, and other generative AI systems, even folding these tools into their daily lives and work, the legal question of the day concerns liability for the content generative AI produces. For the last 25 years, cases interpreting Section 230 of the Communications Decency Act (“Section 230”) have provided reliable signposts about responsibility for most content we see on the Internet. But when these precedents are applied to generative AI products, we are in uncharted territory.

Since the Fourth Circuit’s decision in Zeran v. America Online, Inc. recognized “a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service,” Section 230 has shielded companies from the threat of litigation arising from most content appearing on their platforms, allowing the Internet to grow and expand around the world. However, while there is little question that generative AI is a disruptive technology with seemingly boundless promise for a 21st-century Internet, it is unclear whether Section 230 will protect platforms from the unknown, and even unintentional, harms their generative AI products might create.

Unlike the posts, comments and videos commonly found on the Internet today, which are produced independently by platform users, generative AI products receive only prompts from users and then, as the name suggests, generate new content themselves. Neither the courts nor Congress has spoken on the specific question of legal responsibility for content created by generative AI in response to user prompts, but both a plain reading of Section 230 and a bipartisan bill introduced this week in the U.S. Senate seem to signal significant legal risk for online platforms offering generative AI products.

Generative AI “Hallucinations” Likely to Create Legal Risk for Online Platforms

On June 6, 2023, Mark Walters, a radio host, filed a defamation suit in Georgia state court against OpenAI, the maker of ChatGPT. Walters alleged that a journalist, Fred Riehl, had used ChatGPT while researching a federal lawsuit, Second Amendment Foundation v. Ferguson, and that the generative AI tool produced for Riehl a legal complaint accusing Walters of embezzling money from the gun rights group. ChatGPT’s case summary allegedly stated that the Second Amendment Foundation’s founder, Alan Gottlieb, was suing Walters, as the foundation’s treasurer and chief financial officer, for fraud and embezzlement. The catch: none of it was true.

Walters was not actually named in the foundation’s lawsuit; he has never been employed by the Second Amendment Foundation, let alone as its treasurer and CFO; and the actual case did not arise from a fraud claim at all. When Riehl contacted Gottlieb seeking clarification for his news story, Gottlieb confirmed that Walters had nothing to do with the foundation’s lawsuit. The complaint ChatGPT created was a fabrication, of the kind known in generative AI parlance as a “hallucination.”

Section 230 protects platforms from being “treated as the publisher or speaker of any information provided by another information content provider,” but it does not necessarily shield platforms when they are “responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” In other words, although they are immune from liability for harm caused by most user-generated content, platforms remain exposed to liability for their own online content or online content they helped create. The content underlying Walters’ suit originated with a prompt entered by Riehl, but it was ChatGPT, OpenAI’s generative AI product, that “creat[ed] or develop[ed]” the false and potentially defamatory information. Thus, the plain language of Section 230 seems to offer OpenAI no protection from Walters’ claims.

It also appears that leaders on both sides of the political aisle would deny OpenAI immunity under Section 230 from harm caused by false information generated by ChatGPT. On June 16, 2023, bipartisan legislation dubbed the “No Section 230 Immunity for AI Act” was introduced in the Senate by the co-leaders of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, Sen. Richard Blumenthal (D-Conn.) and Sen. Josh Hawley (R-Mo.). Their bill would “waive immunity under section 230 of the Communications Act of 1934 for claims and charges related to generative artificial intelligence” and give those harmed by generative AI a legal course of action in either state or federal court. Setting aside other suits targeting OpenAI with copyright-related claims, Walters’ suit appears to be the first to test whether content generated by an AI product in response to a user prompt qualifies as information “originating with a third-party user,” the category from which online platforms have been immune since Zeran.

It is perhaps no accident that this proposed legislation follows congressional testimony on May 16, 2023, by Sam Altman, OpenAI’s CEO, who said it is “essential to develop regulations” surrounding AI safety, given the “potential for AI tools to contribute to disinformation campaigns.” And, although it is hard to see what harm he may have suffered here, Walters’ lawsuit appears to be just the sort contemplated by the Blumenthal/Hawley legislation to hold platforms accountable for false information produced by their products. While it remains unclear whether the courts or Congress will ultimately decide generative AI’s fate with respect to Section 230, there should be no illusions about the meaningful legal risk involved in developing (and defending) generative AI products.

Conclusion

Despite many unanswered questions surrounding the legal future of generative AI, the bipartisan attack on Section 230 with respect to content created using such tools suggests that some will be answered in court. Thus, the spectrum of legal risk related to generative AI systems extends into areas from which online platforms have been mostly shielded for well over two decades. It remains to be seen whether guardrails can be put in place to protect platforms in the vanguard of this important new technology, and until the courts act or Congress passes applicable legislation, online platforms should proceed with caution, following cases like Walters closely to see how this novel area of the law unfolds.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Pillsbury Winthrop Shaw Pittman LLP | Attorney Advertising
