The EU Reaches a Political Agreement on the AI Act

On December 8, representatives from the European Commission, the European Parliament, and the Council of the European Union (EU) reached political agreement on the shape and contents of the EU’s AI Act (the “Act”), setting the stage for the EU to implement the world’s first comprehensive AI law. As we have discussed in previous posts, the AI Act addresses AI risks to health, safety, and fundamental rights and takes a tiered, risk-based approach to evaluating AI systems.

The final text of the Act has not yet been released or agreed to by the member countries, but the European Commission (the “Commission”) has released an overview, which we have summarized below. A complete text of the Act may not be available until February 2024, and the Act will go into effect two years after its final adoption.

We will continue to monitor important updates regarding the EU AI Act.

The Act’s Application

The Act will apply to public and private actors inside and outside the EU if the AI system is placed in the EU market or affects people located in the EU. The Act will apply to AI developers as well as vendors that use, but did not themselves develop, AI systems. 

Risk-Based Approach

The Act takes a risk-based approach, assigning levels of risk to AI systems. The risk classification is based on the intended purpose of the AI system and depends on the function the system performs and on the specific modalities for which it is used.

Minimal risk 

The Commission expects the vast majority of AI systems currently used or likely to be used in the EU to fall into this category. These AI systems can be developed and used under existing legislation without additional legal obligations.

High risk 

A limited number of AI systems that can create harmful risks to people’s safety or fundamental rights are considered high-risk. An AI system will always be considered high-risk if it performs profiling of people.

The Act will provide a definition of high-risk, as well as a methodology for identifying high-risk AI systems. The Act will also provide, in an annex, examples of high-risk use cases, including educational and vocational training, employment management, access to essential services, and certain law enforcement uses.

Before placing a high-risk AI system on the EU market, providers will be required to perform a conformity assessment to demonstrate that their system complies with the mandatory requirements for trustworthy AI. A third-party conformity assessment will always be required for biometric systems. After high-risk AI systems are placed on the market, providers will be required to implement quality and risk management systems to ensure compliance with new requirements. 

High-risk AI systems deployed by public entities will need to be registered in a public EU database, except those used for law enforcement or migration purposes.

Unacceptable risk 

A very limited number of AI systems will be banned because they are deemed to contravene EU values. These uses include:

  • Social scoring, 
  • Exploitation of people’s vulnerabilities,
  • Biometric categorization to make inferences about sensitive characteristics (including race, political ideology, religious beliefs, and sexual orientation),
  • Individual predictive policing,
  • Emotion recognition in the workplace and education institutions, and
  • Untargeted scraping of the internet or CCTV footage for facial images to expand databases.

Transparency risk 

The Act will impose specific transparency requirements on certain AI systems, such as chatbots, to make users aware they are interacting with a machine.

Systemic risks and general-purpose AI models

The EU considers general-purpose AI models, including large generative AI models, to carry systemic risks when they are widely used and/or highly capable. Under the Act, general-purpose models trained using a total computing power of more than 10^25 floating-point operations (FLOPs) are considered to carry systemic risks.
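
For context, a common back-of-the-envelope heuristic estimates the training compute of a dense transformer model at roughly six FLOPs per model parameter per training token. The short Python sketch below is our own illustration, not a methodology prescribed by the Act; the model sizes and token counts in it are hypothetical, and it simply shows how such an estimate compares to the 10^25 FLOP threshold.

    # Illustrative only: the "6 FLOPs per parameter per token" figure is a
    # common rule of thumb for estimating dense-transformer training compute;
    # it is not a methodology prescribed by the AI Act. All model sizes and
    # token counts below are hypothetical.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the Commission's overview

    def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
        return 6.0 * n_parameters * n_training_tokens

    hypothetical_models = {
        "mid-size model (7B params, 2T tokens)": (7e9, 2e12),
        "frontier-scale model (500B params, 10T tokens)": (500e9, 10e12),
    }

    for name, (params, tokens) in hypothetical_models.items():
        flops = estimate_training_flops(params, tokens)
        verdict = "exceeds" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "falls below"
        print(f"{name}: ~{flops:.1e} FLOPs, {verdict} the 1e25 threshold")

Under these assumptions, the hypothetical mid-size model lands around 8.4e22 FLOPs, well below the threshold, while the hypothetical frontier-scale model lands around 3e25 FLOPs, above it.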

Providers of general-purpose AI models will be required to disclose certain information to downstream system providers and will need to have policies in place to ensure compliance with copyright law when training their models. 

Regulation of Biometric Identification

The Commission recognizes the risk posed by false acceptance rates in biometric systems. The Act therefore prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except for an enumerated list of serious crimes, the targeted search for specific victims, or the prevention of threats to the life or physical safety of persons.

Under the Act, real-time remote biometric identification by law enforcement authorities will be subject to prior authorization by a judicial or independent administrative authority, which must be preceded by a fundamental rights impact assessment. Use of AI systems for post (retrospective) remote biometric identification will likewise require prior authorization by a judicial or independent administrative authority.

Protection of Fundamental Rights

The Act anticipates that certain “black box” uses of AI might pose a threat to fundamental rights and lead to discrimination. The Act therefore incorporates accountability and transparency requirements for the use of high-risk AI systems.

The Act also requires deployers that are bodies governed by public law, or private operators providing public services, to conduct a fundamental rights impact assessment before deploying high-risk systems. Such an assessment must include a detailed description of how the system might impact fundamental rights, including:

  • the deployer’s processes in which the high-risk AI system will be used,
  • the period of time and frequency with which the high-risk system is intended to be used,
  • the categories of people and groups likely to be affected by its use in the specific context,
  • the specific risks of harm likely to impact the affected categories of people or a group, and
  • a description of the implementation of human oversight measures and of measures to be taken in case of the materialization of the risks.

Gender and Racial Bias

Recognizing that AI systems can perpetuate existing biases and structural discrimination, the Act requires that high-risk systems be technically robust to ensure that the technology is fit for purpose and that false results do not disproportionately affect protected groups. The Act further requires that high-risk systems be trained and tested with sufficiently representative datasets to reduce the risk of unfair biases being embedded in the model. High-risk systems must also be traceable and auditable, including through the preservation of documentation of the data used to train the algorithm, which could be used in investigations.

Enforcement

After adoption by the European Parliament and the Council, the Act will be fully applicable 24 months after entry into force, with a graduated approach that includes Member States gradually phasing out prohibited systems. 

Each Member State will be required to designate at least one national authority to supervise the application and implementation of the Act.  Each Member State will also designate one national supervisory authority to represent the State in the European Artificial Intelligence Board.  This Board will help facilitate a smooth and harmonized implementation of the new AI regulation and will provide recommendations to the Commission regarding high-risk AI systems. 

The Commission will also establish a new European AI Office (the “Office”) with a mission to develop Union expertise and capabilities in the AI field. The Office will enforce and supervise the new rules for general-purpose AI models and ensure coordination on AI policy among involved Union parties.

The Act sets forth penalties for infringement and non-compliance of up to 35 million euros or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© WilmerHale | Attorney Advertising
