AI and GDPR

Thomas Fox - Compliance Evangelist

Artificial Intelligence (AI) has revolutionized various industries, but with great power comes great responsibility. Regulators in the European Union (EU) are taking a proactive approach to the compliance and data protection issues surrounding AI and generative AI. Recent cases, such as Google having to pause the EU rollout of its AI tool Bard, have highlighted the urgent need for regulation in this rapidly evolving field. I recently had the opportunity to visit with GDPR maven Jonathan Armstrong on this topic. In this blog post, we delve into our conversation about some of the key concerns around data and privacy in generative AI, the importance of transparency and consent, and the potential legal and financial consequences for organizations that fail to address these concerns.

One of the key issues in the AI landscape is obtaining informed consent from users. The recent scrutiny faced by the video conferencing platform Zoom serves as a stark reminder of the importance of sound transparency and consent practices. While there has been no official investigation into Zoom's compliance with informed consent requirements, the company has walked back its initial statements and is likely reconsidering how it obtains consent from users.

It is essential to recognize that the need for consent extends not only to the person who hosts a Zoom call but also to everyone invited to join it. Unfortunately, there has been no on-screen warning about consent when using Zoom, leaving users in the dark about the data practices involved. This lack of transparency can carry significant legal and financial penalties: over 70% of GDPR fines involve a lack of transparency by the data controller.

Generative AI heavily relies on large pools of data for training, which raises concerns about copyright infringement and the processing of individuals’ data without consent. For instance, Zoom’s plan to use recorded Zoom calls to train AI tools may violate GDPR’s requirement of informed consent. Similarly, Getty Images has expressed concerns about its copyrighted images being used without consent to train AI models.

Websites often explicitly prohibit scraping their data to train AI models, emphasizing the need for organizations to respect copyright law and privacy regulations. Regulators are rightfully concerned about AI processing individuals' data without their consent or knowledge, as well as about the processing of inaccurate data. Accuracy is a key principle of GDPR, and organizations using AI must conduct thorough data protection impact assessments to ensure compliance.

Several recent cases demonstrate the regulatory focus on AI compliance and transparency. In Italy, rideshare and food delivery applications faced investigations and suspensions over their AI practices. Spain has examined the use of AI in recruitment, highlighting the importance of transparency in candidate selection. And Google's Bard, much like Facebook Dating before it, had its EU launch paused because the mandatory data protection impact assessment (DPIA) had not been produced.


It is concerning that many big tech providers fail to engage with regulators or to produce the required DPIA for their AI applications. This lack of compliance and transparency creates significant exposure for organizations, not just in financial penalties but also in potential litigation, for example where AI is used in hiring.

To navigate the compliance and data protection challenges posed by AI, organizations must prioritize transparency, fairness, and the lawful processing of data. Conducting a data protection impact assessment is crucial, especially when AI is used in Know Your Customer (KYC), due diligence, and job application processes. If risks cannot be resolved or remediated internally, it is advisable to consult regulators and to build time for those consultations into project timelines.

For individuals, it is essential to be aware of the terms and conditions associated with AI applications. In the United States, informed consent is often buried within lengthy terms and conditions, leading to a lack of understanding and awareness. By being vigilant and informed, individuals can better protect their privacy and data rights.

As AI continues to transform industries, compliance and data protection must remain at the forefront of technological advancements. Regulators in the EU are actively addressing the challenges posed by AI and generative AI, emphasizing the need for transparency, consent, and compliance with GDPR obligations. Organizations and individuals must prioritize data protection impact assessments, engage with regulators when necessary, and stay informed about the terms and conditions associated with AI applications. By doing so, we can harness the power of AI while safeguarding our privacy and ensuring ethical practices in this rapidly evolving field.




© Thomas Fox - Compliance Evangelist | Attorney Advertising
