AI Legal News Summer Roundup: Edition 1

White & Case LLP

White & Case Tech Newsflash

Welcome to the first edition of our AI Legal News Summer Roundup! With both the temperature and artificial intelligence developments heating up, the time is ripe for periodic updates this summer on recent legal developments in the quickly changing landscape of AI, including generative AI. As a slight change of pace from our more in-depth Tech Newsflash articles, we hope these snapshots will help keep you up to date with current developments.

So far this summer, we have observed the following: In the United States, while government hearings and calls from lawmakers for AI disclosures and regulation are increasing, a key development, when compared with other parts of the world, is the growing number of lawsuits directed at training data for AI systems, AI-generated outputs and, more recently, the AI systems themselves. Such suits have included claims of direct and vicarious copyright infringement, trademark infringement, defamation, violation of rights of publicity, privacy violations, violations of certain statutes and unfair competition laws, negligence, and violations of property rights. Perhaps unsurprisingly, we have not seen the same level of litigation in other jurisdictions, where the focus has instead been on government regulation, action and inquiries. The European Union, for example, is already well advanced on the regulatory side, with the proposed Artificial Intelligence Act the subject of much political debate. Accordingly, our News Roundup will cover select key developments from our colleagues in certain White & Case offices, ranging from recent lawsuits to governmental and intergovernmental actions, as well as industry developments relating to addressing legal risk.

In this first issue, we focus on certain key developments in the United States, United Kingdom, Europe and APAC from June 26 to July 14, 2023:

1. United States: Multiple class actions filed against providers of foundational generative AI large language models (LLMs) and technologies

Over a period of less than a month, five class actions have been filed against providers of foundational generative AI LLMs and technologies, including OpenAI, Microsoft, Meta and Alphabet/Google. Each class action was filed in the U.S. District Court for the Northern District of California.

a. In P.M. et al v. OpenAI LP et al, No. 3:23 Civ. 3199 (N.D. Cal. Jun. 28, 2023), on June 28, a group of unnamed plaintiffs filed a class action lawsuit against OpenAI and Microsoft, citing privacy concerns over the scraping of personal data. The putative class includes adults and minors who used ChatGPT and/or claim their data was stolen to train the Defendants' generative AI products (including ChatGPT, DALL-E, and VALL-E). Among other assertions, the Plaintiffs claim that the web scraping performed to train the AI products constituted theft and misappropriation, and "violated both the property rights and privacy rights of all individuals whose personal information was scraped." The complaint also cites violations of the Computer Fraud and Abuse Act and various state consumer protection acts, as well as negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, unjust enrichment, and failure to warn. The Plaintiffs seek injunctive relief and the implementation of safeguards, protocols, and third-party oversight. This is the first case in the United States alleging privacy violations against an LLM provider such as OpenAI.

b. On the same day, in Tremblay et al v. OpenAI, Inc. et al, No. 4:23 Civ. 3223 (N.D. Cal. Jun. 28, 2023), Paul Tremblay and Mona Awad filed a class action lawsuit against OpenAI alleging copyright infringement, violations of the Digital Millennium Copyright Act and California common law unfair competition laws, as well as unjust enrichment and negligence. The Plaintiffs allege direct copyright infringement in two respects: (1) that OpenAI made copies of the copyrighted books of the Plaintiffs (a putative class of authors) in training its LLMs; and (2) that the LLMs themselves are infringing derivative works because they cannot function without the expressive information extracted from the Plaintiffs' works. The Plaintiffs also allege vicarious copyright infringement, arguing that every output of the LLMs is an infringing derivative work because it is based on expressive information extracted from the Plaintiffs' works. The Plaintiffs request permanent injunctive relief, as well as statutory and actual damages, among other forms of relief. This is the first case in the United States to allege copyright infringement with respect to the training data (i.e., the AI input), the LLMs themselves, and the AI-generated outputs.

c. Represented by the same law firm as Tremblay and Awad, on July 7, Sarah Silverman, Christopher Golden, and Richard Kadrey filed two class action lawsuits: (1) one against OpenAI in Silverman et al v. OpenAI, Inc. et al, No. 3:23 Civ. 3416 (N.D. Cal. Jul. 7, 2023) with respect to its LLMs; and (2) one against Meta Platforms in Kadrey et al v. Meta Platforms, Inc., No. 3:23 Civ. 3417 (N.D. Cal. Jul. 7, 2023) with respect to its LLaMA language models. These lawsuits also involve copyrighted books and contain substantially similar allegations to those in Tremblay and Awad's lawsuit.

d. Represented by the same law firm as P.M. et al, on July 11, a group of unnamed plaintiffs filed a class action lawsuit against Alphabet, Google DeepMind and Google in J.L. et al v. Alphabet Inc. et al, No. 3:23 Civ. 3440 (N.D. Cal. Jul. 11, 2023), alleging violations of property, privacy and copyright laws. Similar to the OpenAI and Meta class action lawsuits (discussed in (b) and (c) above), the Plaintiffs allege that Google's AI products (including Bard, Imagen, MusicLM, Duet AI and Gemini) directly and vicariously infringe copyright by (1) using copyrighted materials to train the products, and (2) reproducing, displaying and creating derivative works of the copyrighted works in product output (including the allegation that Bard is itself a derivative work of copyrighted materials). The complaint also cites violations of the California Unfair Competition Law and the Digital Millennium Copyright Act, as well as negligence, invasion of privacy, intrusion upon seclusion, conversion, larceny/receipt of stolen property, and unjust enrichment. The Plaintiffs request injunctive relief in the form of a temporary freeze on commercial development and use of the products until Google has implemented certain specified practices, as well as damages, restitution, disgorgement, and other forms of relief. This is the first lawsuit to make such allegations with respect to a generative AI product with text-to-music capabilities.

2. United States: FTC warns of competition concerns regarding control of AI inputs and launches investigation into OpenAI's privacy and data security practices

In a blog post on June 29, the Federal Trade Commission (FTC) highlighted its unfair competition concerns regarding generative AI tools if "a single company or handful of firms controls" one of the essential data inputs the technology relies on. The FTC noted that the large amounts of data needed to "pre-train a generative AI model from scratch" may deter new entrants to the market. The FTC also cited scarce talent and computational resources as additional barriers to entry, which could allow incumbents that control such "key inputs" to foreclose competition. Last week, as revealed in a leaked 20-page letter obtained by the Washington Post, the FTC put OpenAI on notice that the agency is investigating OpenAI's privacy and data security practices and is requiring OpenAI to answer a number of detailed questions, including how OpenAI trains its AI technology and what risk-assessment policies and procedures it has in place. The letter states that the subject of the investigation is whether OpenAI "has (1) engaged in unfair or deceptive privacy or data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm" and "whether Commission action to obtain monetary relief would be in the public interest." This is the first publicly reported FTC investigation into a generative AI company.

3. United States: Colorado Senator requests tech companies update policies and label AI-generated content to combat misinformation

On June 29, Colorado Senator Michael Bennet issued a letter calling on tech leaders to label AI-generated content, particularly in political communications, as such content can have a direct impact on voting and "Americans' confidence in the authenticity of campaign materials." The Senator highlighted the need for "clear, conspicuous labels" on AI-generated content, so that users and voters are not forced to "guess the authenticity of content shared by political candidates, parties, and their supporters." This continues the trend of Members of Congress in the United States calling for transparency and accountability in the use of AI-generated content, including the REAL Political Advertisements Act (a bill co-introduced by Bennet in May that would require federal political ad campaigns using AI-generated content to include a disclaimer regarding such content) and the AI Disclosure Act of 2023 (a bill introduced by U.S. Representative Ritchie Torres in June that would require disclaimers on AI-generated material more generally).

4. United States: Shutterstock and Adobe offer legal protection for enterprise customers using generative AI

Shutterstock and Adobe each launched generative AI tools earlier this year (built on OpenAI's DALL-E 2 engine and Adobe's Firefly model, respectively), and both companies have recently taken steps to protect enterprise customers using these solutions:

a. Adobe's website states that its Firefly tool is "safe for commercial use" as it is "trained on Adobe Stock images, openly licensed content, and public domain content where copyright has expired." Adobe also offers an IP indemnity for Firefly output covering "claims that allege that the Firefly output directly infringes or violates any third party's patent, copyright, trademark, publicity rights or privacy rights."1

b. Shutterstock's website states that the company trains its generative AI models on the works of artists who are compensated accordingly by the company. On July 11, Shutterstock introduced indemnification for AI-generated images, extending the same indemnity protection it provides for standard content to any AI-generated content that has been reviewed by Shutterstock's internal (human) experts.2

5. China: New rules released for generative AI services

On July 13, the Cyberspace Administration of China released interim measures for the management of generative AI services (Interim Measures). In addition to general requirements that generative AI services abide by existing laws and administrative regulations (including respecting intellectual property rights), the Interim Measures require such services to respect social morality and ethics and to abide by provisions including adherence to the core values of socialism and the prevention of discrimination. The Interim Measures reflect the government's desire to promote the healthy development and standardized application of generative AI, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons and other organizations. The Interim Measures come into force on August 15, 2023.

6. Australia: New South Wales Parliament establishes AI parliamentary inquiry and Australian Federal Government seeks input on "Safe and Responsible AI"

On June 27, the New South Wales (NSW) Parliament established a parliamentary inquiry into AI in NSW, to be conducted by the Premier and Finance Committee. According to the terms of reference, the inquiry will be broad-reaching and will consider, among other matters, the current and future extent, nature and impact of AI in NSW; the effectiveness and enforcement of Commonwealth and NSW laws and regulations regarding AI; and whether current NSW laws regarding AI are fit for purpose. Submissions close on October 20, 2023.

The Australian Federal Government also recently released a consultation paper, "Safe and responsible AI in Australia," and is currently seeking submissions. The paper focuses on identifying potential gaps in Australia's existing domestic governance landscape and possible additional governance mechanisms to support the development and adoption of AI. Submissions close on July 26, 2023.

7. Singapore: Monetary Authority of Singapore releases open-source AI toolkit

On June 26, the Monetary Authority of Singapore released the Veritas Toolkit version 2.0, an open-source toolkit designed to enable the responsible use of AI in the financial industry, promote responsible AI ecosystems, and help financial institutions carry out assessments of their compliance with the Fairness, Ethics, Accountability and Transparency (FEAT) principles. Introduced in November 2018, the FEAT principles provide guidance to firms offering financial products and services on the responsible use of AI and data analytics, in order to strengthen internal governance around data management and use.

8. Singapore and UK: Singapore and the UK sign agreements on emerging technologies and data to deepen research and cooperation

Singapore's Ministry of Communications and Information and Smart Nation and Digital Government Office, together with the United Kingdom's Department for Science, Innovation and Technology, recently committed to leading new advances in governments' use of data and new technologies, signing two new agreements to deepen research and regulatory cooperation. The Memorandum of Understanding on Emerging Technologies commits the parties to sharing experiences around building new telecommunications infrastructure, promoting business partnerships on AI, identifying "trustworthy" uses of AI, aligning technical standards for the use of AI, and considering how AI can improve health services.3 The Memorandum of Understanding on Data Cooperation commits the parties to increasing digital trade between the two countries and to more dialogue and knowledge sharing.4

9. Japan and EU: Japan-EU Digital Partnership Council discusses mutual investment in chipmakers, data transfer infrastructure, and generative AI

On July 3, the Japan-EU Digital Partnership Council (Council) held its first ministerial-level meeting to discuss a comprehensive range of digital issues. During the meeting, the EU Commissioner for Internal Market, Thierry Breton, and representatives of Japan's digital, communications, and industry ministries confirmed their commitment to a partnership that will aim to advance cooperation on digital issues and promote a human-centric digital transformation based on shared democratic principles and fundamental rights. Both sides intend to establish a permanent communication channel to update each other regularly on their respective legislative and non-legislative frameworks aimed at realizing trustworthy AI. The Council intends to meet again in 2024.

Avi Tessone, David Marantes (White & Case, Summer Associates, New York), Campbell Fredericks and Emma Hallab (White & Case, Vacation Clerks, Sydney) contributed to the development of this publication.

1 Adobe, Firefly Legal FAQs – Enterprise Customers, June 12, 2023.
2 Shutterstock, Enjoy peace of mind with full legal protection on AI-generated images.
3 UK-Singapore data and tech agreements to boost trade and security, UK Government: Press Release, June 28, 2023.
4 Id.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© White & Case LLP | Attorney Advertising