Compromise Reached on EU AI Act: A Snapshot of What It Means for AI Use in the EU

Morgan Lewis

After lengthy negotiations, representatives of the EU Council, European Parliament, and European Commission have reached a compromise in principle on rules for the use of artificial intelligence (AI), ushering in new safeguards, consumer rights, product liability, and fines, among many other components.

The European Parliament’s press release references the following changes found in the agreement:

  • Safeguards on general-purpose artificial intelligence (GPAI)
  • Limitations on the use of biometric identification systems by law enforcement agencies
  • Bans on using social scoring and AI to manipulate or exploit user vulnerabilities
  • Rights of consumers to launch complaints and receive meaningful explanations of AI decisions that impact their rights
  • Fines ranging from €7.5 million, or 1.5% of global turnover, to €35 million, or 7% of global turnover

Importantly, however, the final text of the AI Act has not been agreed yet. Notably, many of the key details of how the AI Act will, in practice, accomplish the outcomes set out above remain to be finalized during the first quarter of 2024, as we discuss in more detail below.

BANNED APPLICATIONS

Recognizing the potential threat to citizens’ rights and democracy posed by certain applications of AI, the negotiators agreed to prohibit

  • biometric categorization systems that use certain sensitive characteristics;
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behavior or personal characteristics;
  • AI systems that manipulate human behavior to circumvent free will; and
  • AI used to exploit the vulnerabilities of people.

VARIOUS REQUIREMENTS FOR ‘HIGH-RISK’ AI

For so-called high-risk AI systems, the negotiators agreed to include a mandatory fundamental rights impact assessment, among other requirements. AI systems in this category include those used in the insurance and banking sectors, as well as AI systems used to influence the outcome of elections and voter behavior. The negotiators also agreed on a complaint mechanism for individuals “to receive explanations about decisions based on high-risk AI systems that impact their rights.”

TWO-TIER REGULATION FOR GENERAL-PURPOSE AI

For GPAI, there will be two levels of regulation: high-impact and low-impact systems. High-impact GPAI models with systemic risk will face very stringent obligations. If these models meet certain criteria, providers will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure that cybersecurity measures are in place, and report on the energy efficiency of their AI systems.

All GPAI model providers must adhere to transparency requirements such as technical documentation and dissemination of relevant information (e.g., summaries of training content) for downstream operators of high-risk applications.

OUR ASSESSMENT

It appears that the European Parliament achieved most of what it wanted. However, biometric identification systems may be used in publicly accessible spaces only in limited circumstances, i.e., for law enforcement purposes, subject to a court order, and only to prevent serious crimes. Moreover, real-time remote biometric identification (RBI) systems will need to comply with strict conditions. Countries such as France have expressed particular interest in these RBI tools (e.g., to ensure the safety of the 2024 Olympic Games in France).

It will take time for the drafters to iron out the technical details and create a viable legal draft that the EU Council and Parliament can vote on before the EU elections. The ambitious plan is that the AI Act will be enacted in early spring of 2024, with a two-year grace period for compliance. Prohibited systems will have a shorter, six-month period to comply. High-risk AI models are also subject to a 12-month period for compliance with the transparency and governance requirements.

We expect the outcome of this review to produce a very complex legal text, which is unsurprising given the complexity of the concepts it is required to tackle. Details might change as the text is fine-tuned at the technical level in the coming weeks. Businesses and legal experts within and outside the EU will likely read the forthcoming texts closely to assess whether they contain provisions or mechanisms that would allow businesses to avoid having their AI models qualified as high-risk AI or high-impact GPAI models. For businesses whose use of AI does not fall within either of these two categories, the compliance risk should be relatively manageable.

Of note is the AI Act’s definition of AI, which follows the updated OECD standard: “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The AI Act will have an extraterritorial effect similar to the EU General Data Protection Regulation (GDPR). Noncompliance with the AI Act could lead to fines at a higher level than even GDPR fines—ranging from €7.5 million (or 1.5% of global turnover) to €35 million (or 7% of global turnover) depending on the infringement and size of the company. For this reason alone, it will be prudent for developers and users of AI to observe the development and seek compliance with the new European rules at an early stage.

Many open questions remain at this time, including the following:

  • Will the technical and legal restrictions proposed by the AI Act slow down innovation?
  • Will countries outside of the EU follow the same principles?
  • Will businesses adhere to the terms of the AI Act from the start?
  • What is required for a “fundamental rights impact assessment” and how will this be tested?
  • What exactly is required for AI training?
  • How will the AI Act be enforced at a member state level?

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morgan Lewis | Attorney Advertising