DSIR Deeper Dive: Absent Legislation, Privacy Regulators Offer Guidance on AI

By now, many of us are using AI, advising others about how to use it, and waiting for a legislative miracle to give us guardrails for what we can and cannot do with AI. A lot of effort has been put into tracking the current legislative proposals, so we are not going to do that here. Perhaps more interesting are the actions data protection authorities (DPAs) worldwide are already taking with respect to AI and what those actions tell us about where they are concentrating their efforts and attention. These actions broadly encompass public guidance and enforcement under existing laws, and several themes emerged in 2023, among them transparency, fairness and accountability. The common thread remains, however, that privacy and data protection are a valuable starting point for the responsible use of AI.

An interesting aspect of many AI legislative proposals currently under consideration is how they intersect with existing privacy and data protection laws. AI legislation will not replace privacy and data protection laws; rather, the goal is to regulate the risk of the technology itself. Privacy and data protection laws will continue to address the risks to personal data, as emphasized by DPAs. In late September, the Office of the Privacy Commissioner of New Zealand published AI guidance explaining how the use of AI relates to each of the existing Information Privacy Principles under New Zealand’s Privacy Act. South Korea’s Personal Information Protection Commission has issued similar guidance to companies subject to the Personal Information Protection Act, focusing on how the Act’s processing principles intersect with each phase of AI development. Saudi Arabia’s AI Ethics Principles (Version 2.0) place comparable emphasis on the AI system life cycle. Meanwhile, the Office of the Privacy Commissioner of Canada has delved into the concept of algorithmic fairness and how fairness can be accomplished in each phase of AI development.

To facilitate compliance (and to help grow the economy), the United Kingdom’s Department for Science, Innovation and Technology is launching an advisory service in 2024 to help businesses bring AI innovations to market quickly and responsibly, and its Information Commissioner’s Office is providing a regulatory sandbox and other advice. Similarly, Argentina is creating a program to support public and private sector development and use of AI while balancing the privacy rights of individuals. The French DPA opened applications for its own AI sandbox in July and has said it will pay particular attention to whether those developing, training and deploying AI tools have conducted data protection impact assessments, have informed users adequately and are permitting people to exercise privacy rights effectively.

The Office of the Australian Information Commissioner’s 2023 survey of Australians’ attitudes toward privacy specifically asked questions about AI. The survey found that 96% of Australians want conditions in place before AI is used to make decisions that affect them, and most believe it is essential that people are told when AI is being used and that they have a right to request human review or challenge a decision made by AI. Transparency – explaining when and how AI is being used – is a common element of regulatory guidance regarding AI tools. The Spanish DPA recently explained the distinction between “transparency” under the EU’s General Data Protection Regulation (GDPR) and the use of the same term in the proposed EU AI Act. Under the GDPR, “transparency” means that any data controller needs to inform people about the potential effects of any personal data processing. Under the proposed AI Act, transparency speaks to the development of explainable, traceable and auditable AI systems. To assist, the Norwegian DPA published guidance on effective AI transparency that concludes with a reminder to consider who is being informed, how the notice is being received, what needs to be explained and where the explanation should be made available.

At the end of September, the European Data Protection Supervisor remarked on the “duality of AI” in a blog post noting AI’s capacity to enhance cybersecurity while simultaneously creating more opportunities for cybercriminals to deceive and exploit the unwary. The concept of educating the public is repeated throughout regulatory guidance. DPAs seem to acknowledge that enforcement alone does not solve the problem without the concurrent education of AI users. The Canadian Centre for Cyber Security, for example, warned generative AI users to verify content, including fact checking AI outputs against other known credible sources. Japan’s Personal Information Protection Commission published guidelines for generative AI users aimed at explaining that data input into generative AI tools may be used to train the AI and that using these tools can result in an inadvertent violation of law.

DPAs have also been part of larger conversations about data, data sets and how companies are getting and using mass quantities of data for training AI systems. The Dutch DPA has warned that the inputs people use in generative AI tools may themselves contain sensitive personal information, such as when someone asks for advice about medical concerns or relationship troubles. In July, the Norwegian Consumer Council issued a 75-page report on the consumer harms of generative AI, including those harms that can result from data scraping practices. Discussing data scraping, the report states that increased public awareness about how AI models are trained could “create a chilling effect,” concluding that abandoning the use of social media, for example, should not be the only viable choice for people who do not want their data used for AI model training. Although not explicitly related to AI uses, a multinational group of 12 DPAs issued a joint statement in August condemning illegal data scraping practices and admonishing companies to do more to protect personal information from scraping. Meanwhile, the Spanish DPA’s Innovation and Technology Division has examined the effect of inaccurate data on the development and performance of algorithms, highlighting the need to implement safeguards to protect against inaccuracies in the data used for training AI and designing AI models.

When nudging AI in the right direction proves ineffective, DPAs worldwide can exercise other options. Key players in generative AI development have been obvious targets this year, receiving everything from administrative guidance from Japan’s Personal Information Protection Commission to investigations and requests for information around the globe to temporary bans in Italy. But what are the real concerns here? They align primarily with the following questions.

  • Has accurate and transparent information been provided?
  • Has valid and meaningful consent been obtained (if needed)?
  • What is the impact on minors and other vulnerable populations?
  • Is the AI deceptive or manipulative? Is it spreading disinformation?
  • Is the processing of personal data appropriate? Is the processing necessary, purpose limited and done for an acceptable purpose?
  • Do people have the ability to exercise their privacy rights?
  • Have appropriate safeguards been implemented to mitigate harm?

Transparency has been an important throughline for enforcement actions related to automated decision-making under the GDPR, which requires meaningful information about the logic involved. In one recent enforcement action, a customer of a Berlin bank who had good credit and a stable income was automatically rejected for credit and filed a complaint with the Berlin DPA. In its decision, the Berlin DPA stated that, when using automated decision-making, the bank must provide individualized information about the reasons for the rejection. That said, an Austrian court found that there are limits to the required transparency, holding that a company was not required to disclose, for example, its mathematical or other weighting or calculation methods. Similarly, the Irish DPA, in its recently published case studies, explains that access requests do not necessarily reach all information, and it permitted a bank to withhold its credit scoring models, which were not deemed to be personal data.

DPAs continue to show particular interest in how AI affects minors and other vulnerable populations, who may not be as aware of the risks to their personal data. In September, the Dutch DPA requested information from a technology company about an AI-powered chatbot integrated into a popular children’s app, asking specifically whether children are (or can be) appropriately informed of the associated risks and how long data is retained. The Italian DPA required that an age verification option be added to an app using a virtual friend chatbot and to a popular generative AI tool. The Italian DPA also required that the generative AI tool implement a mechanism to permit the effective exercise of privacy rights and provide clearer information to users.

Concern about the use of AI to spread disinformation has been noted in recent regulatory reporting. One report from the Dutch DPA highlights an increase in the spread of disinformation with the advent of generative AI. Another example from the German Federal Commissioner for Data Protection and Freedom of Information calls out the ways in which generative AI can more efficiently scale persuasive disinformation, while the report discussed above from the Norwegian Consumer Council highlights issues related to the creation of “new” personal data by AI tools and the proliferation of deepfakes.

And, in case we did not already have enough to worry about, the Bavarian DPA pointed out in July that autocorrect and other writing and editing tools in our web browsers often use AI that may be siphoning our personal data, and it advised public bodies to deactivate these features. Honestly, many people probably do not care if their spelling errors are training AI somewhere in the world, but the point DPAs keep reiterating is that people need to be given the choice. The default option when it comes to AI should be the most protective one, with all other options adequately explained. People should be able to clearly understand what happens to the personal data they put into AI and how to evaluate the reliability of its outputs. While we wait for AI legislation, let’s not lose sight of the privacy and data protection laws that do apply and the ways in which they can already be used to promote privacy-forward AI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
