Data privacy implications of ChatGPT

Constangy, Brooks, Smith & Prophete, LLP

By now, you have probably heard about OpenAI’s ChatGPT, an artificially intelligent chatbot, and similar chatbots that have launched in its wake. 

Since its launch, ChatGPT is estimated to have reached more than 100 million users worldwide. It is remarkably advanced, although not infallible. ChatGPT can write essays on complex topics, resumes, cover letters, songs, and fiction, and even pass law school exams. ChatGPT is making waves throughout various sectors and industries and raising important questions about ethics, art, education, employment, intellectual property, and cybersecurity.

From a data privacy perspective, ChatGPT has the potential to challenge and transform existing privacy frameworks. For example, the European Union and the United Kingdom grant data subjects the "right not to be subject to a decision based solely on automated processing," with certain exceptions. That right appears in the EU's General Data Protection Regulation and in the U.K.'s Data Protection Act 2018, the U.K.'s counterpart to the GDPR. Automated decision-making is an increasingly serious concern as technological advances enable ever more efficient processing of personal data.

Because the United States does not have comprehensive federal privacy legislation, California has taken the lead in advancing privacy rights for consumers, and several other U.S. states have followed suit. California's privacy regulator, the California Privacy Protection Agency, has established a subcommittee to advise on automated decision-making, so it is possible that the United States could adopt prohibitions or restrictions on automated decision-making similar to those in the EU and the U.K.

Although automated decision-making can be useful for organizations, it poses serious risks to the individuals subject to it, such as adverse legal effects based on processes they may not understand or that may replicate and exacerbate biases and discriminatory practices. For example, the American Civil Liberties Union has opined that "AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination . . . bias is in the data used to train the AI . . . and can rear its head throughout the AI's design, development, implementation, and use." Similar concerns were raised in a 2022 Constangy webinar on AI featuring Commissioner Keith Sonderling of the Equal Employment Opportunity Commission.

Further, the Italian data protection authority (the Garante) is investigating additional data privacy implications of ChatGPT, including whether it can comply with the GDPR, the legal basis for its collection, processing, and storage of massive amounts of personal data, and its lack of age-verification tools. In the meantime, Italy has temporarily banned ChatGPT.

How organizations will balance the utility of ChatGPT with the privacy rights of individuals, and how regulators will address the risks posed by emerging technologies, will continue to unfold in the coming months and years. Because the European Union often serves as a trailblazer in the regulation of data and technology, we will continue to provide updates on the actions of its data protection authorities as well as on the EU's proposed regulation, the AI Act.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Constangy, Brooks, Smith & Prophete, LLP | Attorney Advertising
