AI Warning: ChatGPT Blocked for Data Laws Breach

Pillsbury Winthrop Shaw Pittman LLP

The future development and use of AI systems are under fire as a European regulator blocks ChatGPT over GDPR breaches.

TAKEAWAYS

  • The Garante, the Italian data regulator, has blocked high-profile AI tool ChatGPT for various breaches of the GDPR (the EU’s General Data Protection Regulation).
  • According to the regulator, OpenAI, ChatGPT’s developer, has likely unlawfully processed the personal data of large numbers of people (including children), included false information in its data sets, raised security concerns and breached various other GDPR requirements.
  • This could have major implications for users and providers of AI tools, and may also lead other enforcers to enact bans and investigations.

Wherever you are located, you need to be mindful of various laws around the world that may apply to your development and use of AI. Recent laws proposed in Europe (e.g., the AI Act) have attracted a lot of attention, but it can often be a mistake to overlook other laws that can apply and are currently in force, such as the General Data Protection Regulation (GDPR). The Italian data regulator’s enforcement action against OpenAI and ChatGPT this past week reminded everyone that laws such as GDPR do indeed impact the creation, development and use of AI.

Amid the rapid growth of AI, some may have forgotten that the systems underpinning it involve the processing of huge amounts of data. The way in which a system is trained, or the way it renders results, may involve processing that is in breach of the law.

Another factor to remember is that being based in the United States or another non-European country will not save you. The GDPR has various extra-territorial triggers that can cause the law to apply. Having no establishment in the EU also means you could be exposed to multiple regulators in different countries, each of which may take action against you.

Although the proposed AI Act will bring in a new law specifically drafted to regulate AI, which, for example, will categorize different types of AI and group them according to risk, breach of existing data laws must not be overlooked and can have serious consequences for both developers and users. The Italian regulator’s action against ChatGPT has highlighted this very real risk.

Various concerns were flagged, including, amongst other things:

  • how the bot was trained, what data was used for training and the origins of that data;
  • whether consent was obtained and, if not, what the basis was for processing the data this way;
  • what controls were used for children’s data;
  • whether the data contains false information; and
  • how individuals can exercise their GDPR rights (such as the right to rectify errors).

Under the GDPR, there must be a legal basis for processing (the most obvious being data subject consent), and one concern is that with rapidly developed AI platforms there will often be no proper consent. Businesses may also encounter problems if they try to rely on other processing grounds, such as legitimate interests, given the sheer scale and type of the processing and concerns over a number of data subject rights that may not easily be overridden by the interests of the business.

If the AI results show errors (as is alleged in the ChatGPT investigation), there will be concerns over the quality of the data (often scraped from the internet) used to train the technology. GDPR gives individuals rights to rectify errors (as does the California Privacy Rights Act (CPRA)). There are also concerns over data minimization requirements not being met under the GDPR, as well as security concerns.

The bottom line is this could be the tip of the iceberg as other enforcers take a closer look at AI models. More bans could follow the Italian one, and we may see AI developers having to delete huge data sets and retrain their bots. These breaches also carry the risk of very large fines.

With an AI arms race now under way, AI providers need to carefully review their compliance position and may need to make changes depending on their system’s history and setup. Users of such AI tools also need to be mindful of these legal risks.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Pillsbury Winthrop Shaw Pittman LLP | Attorney Advertising
