Italy's Data Protection Agency Lifts Ban on ChatGPT

Davis Wright Tremaine LLP

OpenAI, developer of the generative AI system, accepts new regulatory responsibilities to reinstate access for Italian users

Italy's Data Protection Authority (DPA) lifted a temporary ban on ChatGPT's operations in Italy after OpenAI, the system's developer, agreed to implement a series of changes to its online notices and privacy policy focused on transparency, optionality, and security. As discussed in an earlier blog, regulators around the globe are subjecting generative AI technologies to greater scrutiny, and several regulatory bodies have initiated steps to regulate the technology. Italy became the first Western country to ban ChatGPT outright when the country's DPA demanded a temporary halt to the processing of Italian user data over a suspected breach of European privacy regulations. In a parallel development, China recently published draft "Measures for the Administration of Generative Artificial Intelligence Services," which we will discuss in greater detail in a future blog post.

Regulatory Conditions for Generative AI to Resume Operations in Italy

OpenAI responded swiftly to the Italian DPA's concerns by instituting a series of changes to ChatGPT policies and protocols, disclosing additional information about how the company collects and processes data and implementing protocols to enhance the security of the AI tool. The company's actions, including substantive modifications to how people sign up to use ChatGPT and new consumer disclosures and rights concerning how the system collects and processes personal data, mean the Italian DPA has now effectively imposed the first set of rules governing the use of generative AI systems.

The changes adopted by the company in response to the Italian DPA's demands include:

  • Providing notice to users describing what personal data may be processed when training its models;
  • Granting all individuals in Europe, including non-users, the right to opt out of the processing of their data for training the company's models, and providing an online process for exercising that right;
  • Introducing new mechanisms enabling data subjects to obtain erasure of information that is considered inaccurate ("stating that it is technically impossible, as of now, to rectify inaccuracies");
  • Clarifying that processing of certain personal data will continue based on contractual commitments while noting that the company would process users' personal data for training algorithms on the legal basis of its legitimate interest, without prejudice to users' right to opt-out of such processing;
  • Updating the sign-up page to block access by users under the age of 13; and
  • Requesting parental consent for users between the ages of 13 and 18.

Meeting Regulators In the Middle Opens Door for Continued Operations

Despite the notable modifications to ChatGPT policies and procedures, not every item in the Italian DPA's original set of "required measures" was accepted or acted upon. For example, the Italian DPA initially demanded the launch of an "information campaign" that would divulge the "methods and logic" underlying ChatGPT's data processing methodology. In addition, the Italian DPA called for the removal of any reference to the "execution of a contract" as the legal basis for processing a user's personal data, demanding instead that the company rely exclusively on a user's consent or on legitimate interests to collect and process that user's personal data.

Nevertheless, the Italian DPA acknowledged the actions taken by OpenAI, describing them as a "step forward" in reconciling "technological advancements with respect for the rights of individuals." In that sense, the response to the Italian DPA's demands could be viewed as a reasonable "middle ground" and a possible template for how other companies deploying generative AI tools could respond to international data regulators.

The Italian DPA also confirmed it would "carry on its fact-finding activities" under the "umbrella of the ad hoc task force that was set up by the European Data Protection Board."

What's Next

The response from OpenAI may provide a possible roadmap for future interactions between regulators and AI industry participants. A larger issue on the horizon is heightened regulatory scrutiny of companies utilizing AI technologies or considering new AI deployments. For example, the U.K.'s Competition and Markets Authority (CMA) announced the launch of an "initial review" of AI foundation models, including the large language models that underpin ChatGPT and similar AI-centric services. The CMA's announcement indicated that its review would examine "competition and consumer protection considerations" in how AI foundation models are developed and used. The CMA's stated objective is to gain a better understanding of "how foundation models are developing" and to produce "an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future."

For companies offering or developing products and services predicated on AI, it is important to exercise caution and take proactive steps to comply with guidance issued by the U.S. Federal Trade Commission and by other domestic and international regulatory authorities.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Davis Wright Tremaine LLP | Attorney Advertising