Solving for the EU’s Artificial Intelligence Act: Obligations of AI Users

In December 2023, European Union (EU) lawmakers reached an agreement on the EU AI Act. Our first article in this series, "An Introduction to the EU AI Act," focused on applicability, thresholds, timing, and penalties under the EU AI Act. A follow-up article covered the responsibilities of providers of high-risk AI systems. In this article, we turn to the obligations the EU AI Act places on users of AI systems.

Article 29 of the EU AI Act is titled “Obligations of the users of high-risk AI systems” and contains the following three requirements:

  1. User Oversight – Article 29 requires users to monitor the operation of the high-risk AI system and, if they have reason to believe that the system presents a risk to the health, safety, or fundamental rights of an individual, to inform the provider or distributor and suspend use of the system. Users must also inform the provider or distributor of any serious incident or malfunctioning of the system. Under Article 3, a user, provider, and distributor are defined as: 
    1. User – any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
    2. Provider – any natural or legal person, public authority, agency, or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; and
    3. Distributor – any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market without affecting its properties. 
  2. Maintaining Logs – Article 29 requires that users of high-risk AI systems keep the logs automatically generated by that high-risk AI system, to the extent the logs are under their control. As described in our prior article, such logs must contain (a) the start date and time and end date and time of each use, (b) the reference database against which the input data has been checked by the system, (c) the input data for which the search has led to a match, and (d) the identification of the individual(s) involved in verifying the results (see the sketch following this list). 
  3. Data Protection Impact Assessments (DPIAs) – Article 29 requires users of high-risk AI systems to conduct data protection impact assessments under Article 35 of the General Data Protection Regulation (GDPR). 
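
To make the record-keeping requirement concrete, the sketch below models the four mandated log fields as a Python dataclass. This is a minimal illustration, not an implementation prescribed by the Act: the class and field names are our own, and a real retention pipeline would also need storage, retention-period, and access controls.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class HighRiskAILogRecord:
    """One retained log entry, mirroring the four fields the Act
    requires users to keep. All names here are illustrative."""
    use_started_at: datetime           # start date and time of the use
    use_ended_at: datetime             # end date and time of the use
    reference_database: str            # database the input data was checked against
    matched_input_data: str            # input data for which the search led to a match
    verified_by: Optional[str] = None  # individual(s) who verified the results
```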

It is worth highlighting that the scope of AI systems and activities covered by the EU AI Act is narrower than that of the GDPR. In the first article in this series, we reviewed how the EU AI Act is largely focused on requirements for providers of high-risk AI systems that involve:

  1. Biometric identification and categorization of natural persons;
  2. Management and operation of critical infrastructure;
  3. Education and vocational training;
  4. Employment, workers management, and access to self-employment;
  5. AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance;
  6. Law enforcement;
  7. Border control management; or
  8. Administration of justice and democratic processes.

On the other hand, GDPR Article 35 requires organizations to conduct DPIAs on high-risk processing activities, which the European Data Protection Board (EDPB) defines as activities that involve profiling, automated decision-making, processing data on a large scale, matching or combining datasets, innovative use or application of technological solutions, or processing that prevents data subjects from exercising a right. This set of criteria implicates almost all uses of AI systems. 
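
Because the EDPB criteria are broad, a simple pre-screening check makes the point: almost any AI-driven processing trips at least one trigger. The helper below is a hedged sketch; the function and parameter names are our own, it conservatively treats any single criterion as requiring a DPIA, and it is no substitute for a case-by-case legal assessment.

```python
def dpia_likely_required(
    profiling: bool,
    automated_decision_making: bool,
    large_scale_processing: bool,
    matching_or_combining_datasets: bool,
    innovative_technology: bool,
    prevents_exercise_of_rights: bool,
) -> bool:
    """Rough screen against the EDPB criteria summarized above.
    Conservatively, any single criterion is treated as a trigger."""
    return any([
        profiling,
        automated_decision_making,
        large_scale_processing,
        matching_or_combining_datasets,
        innovative_technology,
        prevents_exercise_of_rights,
    ])


# Example: a hypothetical AI resume-screening tool trips several criteria.
print(dpia_likely_required(
    profiling=True,
    automated_decision_making=True,
    large_scale_processing=True,
    matching_or_combining_datasets=False,
    innovative_technology=True,
    prevents_exercise_of_rights=False,
))  # True
```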

In summary, AI and data privacy practitioners should keep in mind that a high-risk AI system as defined by the EU AI Act has a much higher threshold and a narrower definition than a high-risk processing activity as defined by the GDPR. Even if an organization is not a provider of AI systems, it will still need to conduct DPIAs on most, if not all, business processes that use AI.

Written by:

Ankura
