An Introduction to the Basics of the EU AI Act

BakerHostetler

Following several years of drafts and negotiations, on March 13, 2024, the European Parliament approved the EU’s Artificial Intelligence Act (the AI Act), making it the world’s first comprehensive AI legislation. The AI Act stands as the central part of “a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI.” The AI Act aims to strike a balance between promoting AI innovation and protecting the health, safety and fundamental rights of people.

At 459 pages in its most recent form (including recitals and annexes), the AI Act can be daunting to read, so we have provided references to relevant Articles of the AI Act throughout to help guide you.

Timing and Implementation

The AI Act will take effect 20 days after its publication in the Official Journal of the European Union, with key provisions phased in on a staggered schedule over the following 36 months. Publication in the Official Journal is anticipated in April 2024. Within 6 months of the effective date, use of AI systems that pose unacceptable risks must cease. Obligations for providers of general-purpose AI begin 12 months after the effective date. Most high-risk AI system requirements will be in effect 24 months after the effective date, with some types of high-risk AI systems (primarily those regulated by EU product safety laws) not falling within the scope of the AI Act until the 36-month mark.
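To make the staggered timeline concrete, here is a minimal Python sketch that computes each milestone from a hypothetical Official Journal publication date. The publication date used below is an assumption for illustration only; the actual deadlines will run from whenever publication occurs.

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping to the end of the month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

# Hypothetical publication date in the Official Journal (assumption for illustration).
publication = date(2024, 4, 15)
effective = publication + timedelta(days=20)  # the Act takes effect 20 days after publication

milestones = {
    "Use of prohibited (unacceptable-risk) AI systems must cease": add_months(effective, 6),
    "Obligations for providers of general-purpose AI begin": add_months(effective, 12),
    "Most high-risk AI system requirements apply": add_months(effective, 24),
    "Remaining high-risk systems (EU product safety laws) in scope": add_months(effective, 36),
}

print(f"Effective date: {effective.isoformat()}")
for label, deadline in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{deadline.isoformat()}  {label}")
```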

Scope and Enforcement

Like the EU’s General Data Protection Regulation, the AI Act has broad extraterritoriality provisions that can pull organizations not established in the EU into the scope of the law (Article 2). The AI Act will apply to developers (“providers”) and users (“deployers”) of AI systems and general-purpose AI models that are marketed or used within the European Union (including use of their outputs there), as well as to others in the AI supply chain. Under the AI Act, an “AI system” is any machine-based system that can operate with some level of autonomy and that can, for some purpose, infer from its inputs how to generate outputs that can affect physical or virtual environments.

A new European AI Office will play a central role in enforcing the AI Act, which provides for fines of up to €35 million or 7% of global annual turnover, whichever is higher, for violations related to prohibited AI uses (€15 million or 3% of global annual turnover for most other violations). The AI Office will also work with the European Commission, the European Artificial Intelligence Board and national authorities in the EU Member States to help manage and oversee compliance with regulatory obligations under the AI Act.
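Because the fine caps scale with turnover, the higher of the fixed amount and the percentage is what matters for large companies. The short sketch below works through that arithmetic; the function name and the example turnover figure are illustrative assumptions.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float, prohibited_use: bool) -> float:
    """Upper bound of an AI Act administrative fine: the higher of the
    fixed amount and the turnover-based percentage."""
    if prohibited_use:
        return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover.
print(f"Prohibited-use cap:  €{max_fine_eur(2e9, prohibited_use=True):,.0f}")   # €140,000,000
print(f"Other-violation cap: €{max_fine_eur(2e9, prohibited_use=False):,.0f}")  # €60,000,000
```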

Taking a Risk-Based Approach

The AI Act takes a risk-based approach, calibrating obligations and requirements to the level of risk an AI use presents, with the majority of restrictions falling on AI systems used for unacceptable or high-risk purposes as well as on general-purpose AI models. The lists of AI uses that present unacceptable or high risk will be reviewed and amended periodically to account for changes in the technology. Additionally, providers will take on more obligations under the AI Act than others, although a number of requirements also fall on those deploying AI systems in the EU.
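As a rough mental model of this tiering, the sketch below encodes the categories discussed in this alert as a simple data structure and triage rule. The tier names, the catch-all minimal tier and the triage helper are our illustrative assumptions rather than terms from the Act; general-purpose AI models carry a parallel set of obligations and are omitted here.

```python
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()    # unacceptable risk: banned outright (Article 5)
    HIGH_RISK = auto()     # permitted, subject to extensive requirements (Article 6)
    TRANSPARENCY = auto()  # AI that interacts directly with people (Article 50)
    MINIMAL = auto()       # assumption: everything else, no specific obligations

def triage(prohibited_use: bool, high_risk_use: bool, interacts_with_people: bool) -> RiskTier:
    """Map an AI use case to the strictest applicable tier (illustrative only)."""
    if prohibited_use:
        return RiskTier.PROHIBITED
    if high_risk_use:
        return RiskTier.HIGH_RISK
    if interacts_with_people:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a resume-screening tool falls under the high-risk employment category.
print(triage(prohibited_use=False, high_risk_use=True, interacts_with_people=False))
```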

Prohibited AI Systems. AI systems that pose unacceptable risks are prohibited (Article 5) and, if currently in use, must be phased out within 6 months of the AI Act’s effective date. These are AI systems considered clear threats to the safety, livelihoods or fundamental rights of people in the EU. With some exceptions, they include AI uses that:

  • Are intended to manipulate behaviors in ways that are likely to cause significant harm.
  • Infer emotions in the workplace or in educational settings.
  • Exploit vulnerable populations.
  • Evaluate or classify people based on their behaviors to develop social scores that result in unfavorable treatment or other detrimental effects.
  • Use bulk, untargeted scraping of facial images for facial recognition databases.
  • Predict the likelihood that a person will commit a crime based solely on profiling.
  • Use remote “real-time” biometric identification in publicly accessible spaces for law enforcement purposes.

High-Risk AI Systems. High-risk AI systems can pose substantial risks to people but may be developed and used in compliance with the requirements of the AI Act. Among those AI uses that can result in high risk (Article 6) are:

  • Product safety, especially for regulated products, such as vehicles, toys and medical devices.
  • Critical infrastructure, such as digital infrastructure, traffic flows and utilities.
  • Education or vocational training, such as access to education, exam scoring and monitoring prohibited testing behaviors.
  • Employment and workforce management, including recruitment, applicant screening, promotion and termination.
  • Essential services and benefits, such as eligibility determinations, insurance pricing and emergency service dispatching.
  • Law enforcement and border controls, including evaluating the reliability of evidence and making decisions regarding migration or asylum.
  • Legal justice and democracy, such as researching and interpreting facts and applying the law, or influencing elections or voting behaviors.

The development and provision of high-risk AI systems is subject to a number of requirements, which include:

  • An AI risk management system that is established, implemented, documented and maintained throughout the entire life cycle of a high-risk AI system to identify, minimize and manage reasonably foreseeable risks (Article 9).
  • Documented data governance for high-risk AI systems that involve training AI models, including information about relevant design choices, data provenance, data quality, assumptions and potential biases (Article 10).
  • Thorough and up-to-date technical documentation (Article 11 and Annex IV).
  • Automatic activity logging to assess operation, identify risks and facilitate post-market monitoring (Article 12).
  • Transparency and clear information to deployers of high-risk AI systems, including accurate instructions for use (Article 13).
  • Human oversight measures to minimize risk (Article 14).
  • Accuracy, robustness and cybersecurity so that the high-risk AI system performs as expected and is resistant to unauthorized attempts to alter its use (Article 15).
  • A conformity assessment procedure to demonstrate compliance, which varies depending on the type of high-risk AI system (Articles 43-47) and other testing (Article 60).
  • Database registration (Articles 49 and 71).
  • Post-market monitoring (Article 72).
  • Notice of serious incidents or widespread infringement and investigation of the incident (Article 73).

Additional requirements for developers (“providers”) of high-risk AI systems are laid out in Articles 16 through 22 and include quality management systems, further documentation requirements, corrective actions for nonconforming high-risk AI systems and cooperation with competent authorities in the EU. Specific obligations on other organizations involved in the high-risk AI system supply chain follow in Articles 23 through 27. Deployers of high-risk AI systems, for example, must also implement appropriate technical and organizational measures to ensure compliance with instructions for the use of the AI system, assign competent individuals to provide human oversight, and monitor the operation of the high-risk AI system.

General-Purpose AI Models and Generative AI. Early drafts of the AI Act did not address “general purpose” AI models. But given the rapid growth of generative AI, the drafters added provisions to address the risks of general-purpose AI models, which are defined as AI models that can perform many different tasks or that are of sufficient generality to be integrated into a variety of downstream applications or systems. Large generative AI models are a common example of a general-purpose AI model.

Providers of general-purpose AI models must publish a detailed summary of the content used for training their models, including text, pictures, video and other Internet data, and put in place a policy to comply with EU copyright law (Articles 53-55). High-impact general-purpose AI models that may pose systemic risks are subject to additional obligations, including risk assessments, model evaluations, testing and serious incident reporting.

Transparency is a primary obligation with respect to generative AI and other AI systems that interact directly with people (Article 50). People should be made aware when they are interacting with generative AI, especially where they may be influenced by the AI in some way. For instance, people using an AI-powered chatbot should be told so that they can make informed decisions about their interactions with it. People should also be put on notice when encountering AI-generated image, audio, text or video content that may be misleading (“deep fakes”). As an example, AI-generated news articles must be labeled as artificially generated unless they are subject to human-managed editorial processes prior to publication.
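As a concrete illustration of the chatbot disclosure and content-labeling points above, consider the following sketch. The notice wording, function names and metadata field are assumptions of ours; the Act requires that people be informed, not any particular implementation.

```python
AI_NOTICE = "Notice: you are interacting with an AI system."  # assumed wording

def with_disclosure(response: str, first_turn: bool) -> str:
    """Prepend an AI-interaction notice to a chatbot's first response."""
    return f"{AI_NOTICE}\n\n{response}" if first_turn else response

def label_ai_generated(metadata: dict) -> dict:
    """Attach an 'artificially generated' flag to content metadata (assumed field name)."""
    return {**metadata, "ai_generated": True}

print(with_disclosure("Hello! How can I help?", first_turn=True))
print(label_ai_generated({"type": "news_article", "title": "Example"}))
```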

What’s Next?

The first step for many will be determining whether your organization is using AI, or outputs from AI, in the EU. If so, the AI Act will likely apply. Next, assess any AI systems used by your organization for possible prohibited or high-risk AI uses. If such a use is identified, prioritize phasing out the use of any prohibited AI system and develop a plan for bringing high-risk AI uses into compliance with the AI Act. Developers of general-purpose AI models should likewise prioritize their compliance obligations. Organizations with no prohibited or high-risk AI systems planned or in use should still consider the transparency requirements of the AI Act. Even when not specifically required, transparency has become an expected best practice for AI use.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
