Friend or Foe? Legal Risks Arising From ChatGPT and Other Generative AI Software

Bailey & Glasser LLP

[co-author: Abraham Reiss]

Introduction

Recent breakthroughs in generative artificial intelligence (AI) have captured significant media attention. Developers argue that the technology, which learns from data to produce new text, visual, or audio content based on a user’s prompt, will turbocharge productivity and revolutionize business. Organizations in sectors ranging from banking to health care to journalism are already exploring integrating tools like OpenAI’s ChatGPT chatbot and DALL-E image generator into their workplaces.

These new tools should be approached with a great deal of caution, as introducing generative AI into a business could create a complex minefield of legal risks. The technology raises significant dangers related to breaches of confidentiality and data privacy, intellectual property infringement, obligations to consumers, and liability for negligence, defamation, or discrimination arising from the use of false or biased information.

Earlier in June, a Manhattan lawyer faced sanctions in federal court for filing a legal brief, generated by ChatGPT, that included several citations to nonexistent cases. After being scolded by the judge for relying on “legal gibberish” generated by AI, the attorney admitted that he had no idea ChatGPT could fabricate cases. At a recent United States Senate hearing on the dangers of AI and potential regulatory safeguards, OpenAI’s CEO, Sam Altman, practically begged lawmakers to create a new AI regulatory agency that would license, test, and screen AI models, an unprecedented request from a tech leader. Around the world, authorities are eager to tighten regulations related to AI, and changes may be on the horizon.

This article discusses legal dangers related to the use of generative AI in five specific areas: confidentiality and data privacy, intellectual property, obligations to consumers, false information, and bias and discrimination.

Generative AI and Gibberish

Artificial intelligence generally refers to technology that uses data to perform tasks typically done by humans, such as analysis, pattern recognition, and prediction. One particular subset of AI technology—generative AI—is responsible for the current cultural and corporate metamorphosis. Of the fleet of emerging generative AI products, ChatGPT has grabbed the greatest share of headlines. AI developer OpenAI released the online chatbot in November 2022 with financial backing from Microsoft, and launched an update, GPT-4, in March 2023.

ChatGPT has impressed—and even stunned—with its ability to create unique content that sounds convincingly human. OpenAI’s system and rival tools from Google and Microsoft’s Bing are what AI developers call “large language models” (LLMs). Using a huge library of text data that includes books, articles, research papers, blogs, and social media posts, LLMs are “trained” to decode, analyze, and produce language. These AI applications can process and respond to a user’s prompts in an instant, and they are capable of handling requests far more sophisticated than simple web searches.

For instance, ChatGPT will eagerly respond to an essay question on the Roman Empire, craft a Shakespearean sonnet on any subject, suggest improvements to computer code, or devise a reply to your mother-in-law’s email. Other popular generative AI models can design graphic art, replicate voices, produce songs, or even put together a rudimentary sitcom episode. Many of these new tools are widely available online, either free of charge or for a relatively modest subscription fee.
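
For readers curious about how these tools are reached programmatically, below is a minimal sketch of a prompt sent to a chat model through OpenAI’s Python library. The model name, placeholder API key, and prompt are illustrative assumptions, and the library’s interface changes frequently.

```python
# A minimal sketch of prompting an LLM programmatically.
# Assumes OpenAI's Python library; the model name and API details
# are illustrative and subject to change.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "Draft a two-sentence summary of the fall of the Roman Empire."},
    ],
)

# The generated text arrives as an ordinary string.
print(response.choices[0].message.content)
```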

An Overview of Select Legal Risks

The power of generative AI technology is already transforming the workforce in ways we could not have imagined just a year ago. LLMs like ChatGPT can now instantly finish certain research, analysis, writing, or administrative tasks that would take a human employee hours to complete. However, the risks associated with implementing generative AI tools are extensive and demand careful consideration. We are continuing to monitor these risks as they develop.

1. Confidentiality and Data Privacy

Tools like ChatGPT could cut workloads by quickly synthesizing large amounts of text, yet users should be highly discerning with the documents, information, and data they share with these programs. Uploading privileged information into a generative AI platform could be construed as sharing it with a third party, thereby potentially violating the attorney-client privilege, contractual confidentiality terms, or privacy statutes such as HIPAA.

Notably, ChatGPT and its competitors do not protect user inputs—once information is entered into the software, there is no guarantee that it will be treated as private. These platforms often retain the right to keep user inputs for training purposes, and a generative AI model “learning” from user data might even integrate those inputs into its later outputs, introducing the possibility of leaking protected information. For exactly these reasons, Samsung engineers in Korea reportedly ended up in hot water in April after feeding ChatGPT confidential source code and meeting records; Samsung subsequently banned its employees from using generative AI tools altogether.

Recommendations:
– Review the data policies of generative AI products before adopting them.
– When possible, opt out of sharing inputs with developers as training data.
– Avoid sharing proprietary information with generative AI models; a simple redaction pass, sketched after this list, can help enforce that rule.
– Develop and circulate workplace policies and warnings regarding the confidentiality dangers connected to generative AI.
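
As one illustration of the third recommendation, a lightweight screening step can strip obviously sensitive strings from text before it ever leaves the organization. This is a minimal sketch, not a complete data-loss-prevention solution; the patterns shown are assumptions and would need to be tailored to the business.

```python
import re

# A toy redaction pass run before any text is sent to an external
# AI service. The patterns below are illustrative assumptions; a real
# deployment would rely on a vetted data-loss-prevention tool.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",        # U.S. Social Security numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",  # email addresses
]

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with a placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

# Only the redacted prompt would be sent to the chatbot.
prompt = redact("Summarize this memo for jane.doe@example.com (SSN 123-45-6789).")
print(prompt)  # Summarize this memo for [REDACTED] (SSN [REDACTED]).
```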

2. Intellectual Property

Using AI tools could raise copyright liabilities, as the models use source data in ways that arguably infringe on IP. Generative AI developers OpenAI, Midjourney, and Stability AI are currently fighting lawsuits from artists, programmers, and stock photography purveyor Getty Images alleging that the platforms “scraped” protected work from the internet without first requesting the owners’ permission.

Further, using AI-generated content that resembles copyrighted material might also invite intellectual property claims. By deploying products and services reliant on AI generation, businesses might inadvertently infringe existing protected work.

While the current legal battles related to “scraping” practices center on the misuse of images and code, text generators could also introduce the possibility of accidental plagiarism, should an AI-generated output closely resemble a published writing.

Work generated using AI could also prove difficult to license. In February, the U.S. Copyright Office declined to issue protection for AI-generated artwork in a comic book, judging that the work was “not the product of human authorship.” In August 2022, the U.S. Court of Appeals for the Federal Circuit backed a decision by the U.S. Patent and Trademark Office (USPTO) to reject crediting AI as an inventor.

Recommendations:
– Consult with IP counsel to weigh the infringement risks of using AI programs to create content, design products, or provide services.
– Be aware of the significant barriers related to securing ownership and authorship credit for work created using generative AI.
– Take steps to secure copyright, trademark, or patent protection for original content and technology as a safeguard against unauthorized AI sourcing.

3. Obligations to Consumers of AI Products and Services

Integrating generative AI technology into products and services offered to consumers, such as customer service chatbots, would likely involve legal obligations related to transparency and data privacy.

Misrepresenting an AI as human or advertising AI-generated work as human-produced could qualify as an unfair and deceptive trade practice, raising legal risk. The Federal Trade Commission (FTC) or state consumer protection authorities could investigate misleading uses of AI and bring enforcement actions.

Regulation related to the collection of consumer data in AI interactions may also come into play. Under data privacy rules such as the California Consumer Privacy Act (CCPA) or the EU’s General Data Protection Regulation (GDPR), purveyors of AI-powered products may be required to provide notice to users regarding any collection of their data, acquire consent, and offer an opt-out.

Recommendations:
– Per FTC guidance, be transparent with customers about interactions with AI (e.g., provide notice when customer service is handled by an AI chatbot, as in the sketch following this list, and advise clients on how AI tools will contribute to the generation of a product or service).
– Carefully review the applicable regulatory landscape and ensure data privacy practices are compliant.
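
To make the disclosure recommendation concrete, here is a minimal sketch of a chatbot session handler that leads with an AI disclosure and records the customer’s consent before any data collection begins. The function names and messages are hypothetical illustrations, not a compliance template.

```python
# A minimal sketch of an AI chatbot session that leads with a
# disclosure and obtains consent before collecting any user data.
# All names and messages are hypothetical.
DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human agent. "
    "Transcripts may be retained. Reply CONSENT to continue, or AGENT to "
    "reach a human."
)

def start_session(send, receive):
    """Run the mandatory disclosure/consent step before anything else."""
    send(DISCLOSURE)
    reply = receive().strip().upper()
    if reply != "CONSENT":
        send("Transferring you to a human agent.")
        return None
    return {"consented": True}  # only now may the session collect data

# Example wiring, with the console standing in for a chat channel:
session = start_session(send=print, receive=input)
```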

4. False Information (“Hallucination”) Risks

Generative AI tools have captured so much attention because of their startling ability to create a wide range of distinctive, high-quality content. However, the same design elements that make generative AI models so powerful also bring unpredictability and inconsistency. While some outputs are awe-inspiring, others may be odd, disturbing, or blatantly false. Since generative AI forms its responses from a colossal index of internet data, “bad information” can influence its work. For an LLM tool like ChatGPT, information from disreputable sources can produce episodes in which the AI confidently asserts a false or nonsensical answer, a phenomenon some experts have labeled “hallucination.”

Mistakes cannot be blamed on bad data alone, however. Critically, generative AI chatbots cannot use logical reasoning—they can only mimic it. Even when source data is completely factual, an AI might still use it in counterintuitive ways, producing illogical “gibberish.”

The technology’s error-prone tendencies could prove disastrous, especially in high-risk fields. From a business perspective, decisions based on false AI-generated information could damage relationships and cause reputational harm, leading to significant costs. Hallucination could also introduce legal liability, as reliance on errant AI output could lead to extensive harm. Imagine a doctor relying on inaccurate AI analysis to treat a patient, or a financial manager making investment decisions based on dubious AI-powered research. In such cases, those involved might face professional sanctions—as in the case of the attorney who cited nonexistent cases—and could even be found negligent and end up on the hook for tort damages.

An AI’s mistakes might also lead to the distribution of false information, raising the possibility of a libel or defamation lawsuit. If a news organization reported AI-generated falsehoods as fact, for instance, an aggrieved party might sue.

Recommendations:
– Develop risk management systems under which all AI-generated work is thoroughly vetted by a human before it is relied upon or published (a minimal gating sketch follows this list).
– Make all employees aware of the software’s propensity to fabricate and the risks associated with this phenomenon.
– Carefully evaluate contracts with vendors of generative AI software and consider proposing indemnification clauses for injuries and damages resulting from the use of the technology.
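
The first recommendation can be enforced structurally rather than by policy alone. Below is a minimal sketch, with hypothetical names, of a workflow in which AI-generated drafts cannot reach publication without an explicit human approval step.

```python
from dataclasses import dataclass, field

# A minimal human-in-the-loop gate: AI drafts enter a queue and can
# only be released after a named human reviewer approves them.
# All class and function names are hypothetical illustrations.
@dataclass
class Draft:
    text: str
    source: str = "generative_ai"
    approved_by: str | None = None

@dataclass
class ReviewQueue:
    pending: list[Draft] = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        self.pending.append(draft)

    def approve(self, draft: Draft, reviewer: str) -> None:
        draft.approved_by = reviewer  # record who vetted the content

def publish(draft: Draft) -> str:
    if draft.source == "generative_ai" and not draft.approved_by:
        raise PermissionError("AI-generated draft requires human review.")
    return draft.text

queue = ReviewQueue()
draft = Draft(text="AI-generated client memo ...")
queue.submit(draft)
# publish(draft)               # would raise PermissionError here
queue.approve(draft, reviewer="j.smith")
print(publish(draft))          # released only after human sign-off
```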

5. Bias and Discrimination

Since generative AI tools are influenced by their source data, racism, sexism, age discrimination, and other forms of bias and discrimination in online content can corrupt outputs.

While generative AI technology only recently entered the marketplace, AI algorithms have long been shown to discriminate when adapted for decision-making processes such as tenant selection, hiring, and financial lending. Using generative AI for similar selection processes poses the same risk of perpetuating existing biases and inequities along the lines of race and ethnicity, gender, age, religion, and other characteristics. Without proper safeguards, relying on generative AI to inform decisions could lead to violations of federal antidiscrimination laws, including those enforced by the Equal Employment Opportunity Commission (EEOC): Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), the Genetic Information Nondiscrimination Act of 2008 (GINA), and the Americans with Disabilities Act of 1990 (ADA).
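
One widely used screening heuristic in employment contexts is the EEOC’s “four-fifths rule,” under which a selection rate for any group below 80% of the highest group’s rate is treated as evidence of potential adverse impact. Below is a minimal sketch of that check; the sample numbers are hypothetical, and passing it does not by itself establish legal compliance.

```python
# A minimal adverse-impact screen based on the EEOC's "four-fifths
# rule": flag any group whose selection rate falls below 80% of the
# highest group's rate. The sample data is hypothetical.
selections = {
    # group: (number selected, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```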

Recommendations:
– Identify areas where potential AI biases could lead to discriminatory impact (such as employment programs) and enlist outside counsel to carefully monitor the use of the technology with an eye towards these risks.
– Carefully evaluate external AI vendors and their efforts to reduce biases in their models, including how they compile and vet data sources.
– Mandate thorough human review of all AI-assisted decision-making.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Bailey & Glasser LLP | Attorney Advertising
