Risks that AI Developers Must Consider When Developing AI


Artificial Intelligence (AI) is not just a technological innovation; it’s a transformative force that has permeated multiple sectors, from healthcare to finance. As AI startups and companies creating AI tools (“AI Developers”) continue to proliferate, they find themselves navigating a complex and often uncertain legal landscape. This article aims to elucidate several key legal considerations that AI Developers must be aware of as they work to find product-market fit.

Liability for AI-Generated Content

Section 230 of the US Communications Decency Act (“CDA”) has long been a cornerstone for online platforms, providing them with immunity from liability for third-party content.[1] This legal provision has been instrumental in fostering the growth of the internet, allowing platforms to host user-generated content without the fear of legal repercussions. However, the rise of AI-generated content has thrown a wrench into this well-oiled machine.

Recent judicial commentary, most notably from Justice Gorsuch in the case of Gonzalez v. Google, has raised questions about the applicability of Section 230 protections to AI-generated content. According to Justice Gorsuch, AI-generated content “goes beyond picking, choosing, analyzing, or digesting content,” which are activities traditionally protected under the CDA.[2] This evolving interpretation could have far-reaching implications for AI Developers, particularly those that rely on AI for content creation or curation.

If AI is actively creating or editing content, the AI Developer who created the AI could ultimately lose the immunity provided to information content providers by Section 230. This could expose such an AI Developer to liability for any illegal or harmful content generated by the AI. In Walters v. OpenAI, LLC, a lawsuit currently in the Georgia state courts, Mark Walters (“Walters”), a radio host, is suing OpenAI for defamation.[3] Journalist Fred Riehl (“Riehl”) used ChatGPT to research a federal lawsuit involving the Second Amendment Foundation (the “Foundation”).[4] Based on Riehl’s prompt, ChatGPT generated a summary of the federal lawsuit. ChatGPT’s summary named Walters as a defendant, stated that he served as the Foundation’s treasurer and CFO, and claimed that he was accused of fraud and of embezzling funds from the Foundation.[5] However, Walters was not named as a defendant in the federal lawsuit, he did not serve as the Foundation’s treasurer or CFO, and he has never been accused of fraud or embezzlement.[6] ChatGPT fabricated that part of the case summary. Walters is now suing OpenAI for defamation on the basis of ChatGPT’s “hallucination.”

Based on the current interpretation of Section 230, OpenAI will likely not be entitled to immunity for this defamation claim because ChatGPT created a false narrative about Walters seemingly without any input from a third party; Riehl’s prompt regarding the federal lawsuit did not mention Walters or the crimes that ChatGPT said he was accused of. This is not a case where ChatGPT merely picked, chose, analyzed, or digested third-party content, circumstances under which OpenAI might be entitled to Section 230 immunity.

Intellectual Property Rights in AI-Generated Content

In the realm of intellectual property, the law has traditionally recognized a creator’s ownership rights in a given work from “the moment an idea is fixed in a tangible medium of expression.”[7] However, the advent of AI has complicated this straightforward concept. The U.S. Copyright Office’s (the “Office”) position is that authorship requires human creativity.[8] As such, the question of who owns the rights to purely AI-generated work is unresolved. Is it the company that owns the AI, the person who trained it, or the coder who programmed its learning algorithm? Or perhaps the collective creators on whose works it was trained (more on that below)? As it stands, the answer seems to be “no one,” creating a vacuum that could lead to legal disputes.

The case of works containing both human-created and AI-generated content is similarly unclear. The Office has stated that in such instances, the human-generated content is copyrightable while the AI-generated content is not.[9] Works created by humans with the help of AI may be copyrightable if the humans exercised creative control over the work’s expression and “actually formed the traditional elements of authorship.”[10] Additionally, a human may “select or arrange AI-generated material in a sufficiently creative way that the resulting work as a whole constitutes an original work of authorship.”[11] However, in that case, the copyright will only protect the “human-authored” aspects of the work, which are independent of, and do not affect the copyright status of, the AI-generated material itself.

Potential IP Infringement

One of the most important components of a Large Language Model (“LLM”) such as ChatGPT, the most popular consumer-facing AI tool, is the data the AI is trained on, generally referred to as data sets. The quality and size of the data set have a direct impact on the AI’s performance and accuracy. These data sets are so large that by definition no one company can create them, and therefore the creator of an LLM must “scrape” published works, large databases, and even the Internet as a whole to get enough data to train its AI. However, the use of third-party data can raise significant IP concerns. For example, a company that creates an AI platform to analyze customer sentiment in social media posts may inadvertently include copyrighted images or music in its data set. It is probably safe to say that, as of today, no rightsholder has given any LLM creator the right to use its intellectual property, if only because LLMs are so new to the wider public. Further, since the goal of the scraping (use) of such works is to train the LLM, a product through which its creator is almost certainly intending to turn a profit, it seems unlikely that any rightsholder would willingly give such permission without compensation.

Lawsuits alleging that AI platforms have violated privacy rights and infringed copyrighted works are already being filed. Companies like Microsoft and OpenAI have been sued for allegedly using personal data to train their AI models without consent.[12] Specifically, OpenAI and Microsoft are accused of violating the privacy of hundreds of millions of internet users by using those users’ personal data to train their AI systems.[13] There are also copyright-based lawsuits alleging that the data sets used to train these AI systems incorporate copyrighted material without compensating the copyright owners.[14] The outcome of these lawsuits will have a profound impact on what data companies can use to program their AI products and how that data can be used. These suits could also lead to increased protections for personal data and copyrighted material.

Limiting Your Exposure to Copyright and Privacy Risk

To mitigate these risks, AI Developers should be meticulous in obtaining the necessary licenses and permissions for any third-party content used in their data sets. This includes images, text, music, and other creative works. Additionally, AI Developers should conduct regular audits of their data sets to ensure that they are not infringing on any IP rights.
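For AI Developers who maintain their own training pipelines, part of such an audit can be automated. The sketch below is a minimal, hypothetical illustration (in Python) of a provenance check that keeps only data set items whose licenses have been cleared and flags everything else for human and legal review. The record fields, license identifiers, and function names are assumptions made for illustration only; this is not a substitute for obtaining licenses or for a legal review of each data source.

    # Minimal illustrative sketch (not legal advice): screening a training
    # data set so that only items with cleared licenses are retained.
    # The "source", "license", and ALLOWED_LICENSES values are hypothetical.

    from dataclasses import dataclass

    # Licenses the (hypothetical) AI Developer has cleared for training use.
    ALLOWED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "company-owned"}

    @dataclass
    class DataItem:
        source: str           # where the item was collected from
        license: str | None   # license identifier, if known
        text: str             # the content itself

    def audit_data_set(items: list[DataItem]) -> tuple[list[DataItem], list[DataItem]]:
        """Split a data set into items cleared for training and items flagged for review."""
        cleared, flagged = [], []
        for item in items:
            if item.license in ALLOWED_LICENSES:
                cleared.append(item)
            else:
                # Unknown or unlicensed material is flagged for review,
                # not silently included in the training corpus.
                flagged.append(item)
        return cleared, flagged

    if __name__ == "__main__":
        sample = [
            DataItem("company-owned archive", "company-owned", "..."),
            DataItem("scraped web page", None, "..."),
        ]
        ok, review = audit_data_set(sample)
        print(f"{len(ok)} item(s) cleared, {len(review)} item(s) need review")

A check like this only catches what the metadata reveals; scraped material with missing or inaccurate license information still requires the kind of manual diligence described above.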

AI Developers should also develop a strong IP strategy that includes regular monitoring of their data sets to ensure that they are not infringing on any IP rights. This should include a thorough review of any third-party data sources used to train their AI tools and products, as well as an assessment of the AI Developer’s own IP portfolio to ensure that it is adequately protected. In fact, as this article was being drafted, global media conglomerate Getty Images Holdings Inc. (“Getty”) did just that. Getty recently announced that it is working with Nvidia to launch its own generative AI.[15] Getty is ensuring that the data set being used to train its generative AI platform is made up solely of images and data that Getty owns.[16] In doing so, Getty is ensuring that it is not violating anyone else’s IP rights. Clean data sets are the future, and we expect to see other companies following in Getty’s footsteps.

The legal landscape surrounding AI is complex and ever-changing. By understanding the key issues related to liability, intellectual property rights, and the ease of infringing on such rights, companies desiring to create and/or use AI can better position themselves for success and mitigate legal risks. As the law continues to evolve, staying ahead of the curve will be crucial for the long-term viability of any AI Developer.

[1] https://www.naag.org/attorney-general-journal/the-future-of-section-230-what-does-it-mean-for-consumers/

[2] https://www.washingtonpost.com/politics/2023/03/17/ai-chatbots-wont-enjoy-techs-legal-shield-section-230-authors-say/

[3] https://www.vedderprice.com/ai-hallucinations-can-inflict-real-world-pain

[4] Id.

[5] Id.

[6] Id.

[7] 17 USC § 102(a).

[8] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, U.S. Copyright Office, at 16191, https://www.govinfo.gov/content/pkg/FR-2023-03-16/pdf/2023-05321.pdf.

[9] Id. at 16193.

[10] Id. at 16192.

[11] Id.

[12] https://www.reuters.com/legal/litigation/openai-microsoft-hit-with-new-us-consumer-privacy-class-action-2023-09-06/

[13] Id.

[14] https://www.klgates.com/Recent-Trends-in-Generative-Artificial-Intelligence-Litigation-in-the-United-States-9-5-2023

[15] https://www.technologyreview.com/2023/09/25/1080231/getty-images-promises-its-new-ai-doesnt-contain-copyrighted-art/

[16] Id.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© The Rodman Law Group, LLC | Attorney Advertising

Written by:

The Rodman Law Group, LLC