The EU AI Act: Steps to Take in the First 6 Months

Orrick, Herrington & Sutcliffe LLP

The European AI Act has passed. Its provisions will take effect in a staggered manner: the prohibitions on certain AI practices apply six months after the Act enters into force, the obligations for general purpose AI models after 12 months, and the bulk of the remaining provisions after 24 months.

Here are six actions organizations can take to begin meeting their compliance obligations (not necessarily in sequential order):

  1. Identify the AI systems and general purpose AI models your organization is using or developing.
  2. Clarify your organization’s role in relation to each AI system and general purpose AI model (GPAIM).
  3. Determine whether the AI systems and GPAIMs are in scope of the AI Act.
  4. Classify the AI systems and general purpose AI models based on the AI Act risk categories.
  5. Start incorporating AI Act requirements into your contracts and due diligence processes.
  6. Put in place an AI governance framework.

1. Identify the AI systems and general purpose AI models your organization is using or developing.

The AI Act regulates AI systems and general purpose AI models (GPAIMs). As a first step, organizations should therefore undertake an “AI mapping” exercise to determine whether they are developing, using, importing or distributing AI systems and/or developing GPAIMs, and in what capacity (see Action #2).

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

According to the act, a “key characteristic” of an AI system is its “ability to infer,” that is, to derive outputs from the inputs it receives. This differentiates AI systems from simpler human-defined, rules-based systems that automatically execute operations, which fall outside the scope of the regulation. The mapping exercise should therefore focus on the actual functionality of products identified as “AI.”

Since the AI Act also imposes obligations on GPAIM developers, companies that develop their own AI models should evaluate whether those models qualify as GPAIMs.

The AI Act defines a GPAIM as a model “trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”

Examples include models in generative AI systems that create text, images, code and/or audio. 

AI models are an integral part of AI systems but do not, on their own, constitute such systems; additional components, such as a user interface, are required. An AI system based on a GPAIM is defined as a general purpose AI system, a distinction that may be relevant for compliance (see Action #3).

In many cases, mapping the AI systems and models an organization is using or developing should be straightforward. We recommend a department-by-department approach, since this will help surface less visible use cases. For instance, a growing number of free AI systems based on GPAIMs are accessible online via APIs and mobile apps. Using such systems can give rise to disclosure obligations under the act, and an organization that has not identified the use will be unable to meet those obligations.

2. Clarify your organization’s role in relation to each AI system and general purpose AI model.

We recommend determining the role your organization plays in relation to each AI system and model it is using or developing. 

The AI Act applies to all operators in the AI value chain: providers, deployers, importers, distributors and product manufacturers. The obligations the act imposes vary depending on the operator’s role.

Providers of high-risk AI systems have the most obligations under the AI Act, followed by deployers. Importers and distributors have more limited responsibilities. Only providers have obligations in relation to GPAIMs.

Determining your organization’s role(s) and the resulting responsibilities can be tricky, for example when AI products are co-developed or when additional developments are made to an existing AI product. In such cases it may be necessary to review and amend existing agreements (see Action #5).

3. Determine whether the AI systems and GPAIMs are in scope of the AI Act.

For the AI Act to apply, an AI system must be “placed on the market” or “put into service” in the European Union by a provider, and a GPAIM must be “placed on the market” by a provider. These concepts, derived from European product safety law, have their own definitions in the AI Act, and companies should assess their activities in light of those definitions. Notably, the law can apply to a provider that is not established in the European Union.

The AI Act will also apply to providers and deployers of AI systems established outside the European Union where the outputs produced by those systems are used in the EU.

The AI Act also lists a number of cases in which the law will not apply, including: 

  • If AI systems are placed on the market, put into service or used exclusively for military, defense or national security purposes.
  • If AI systems or models are developed and put into service solely for the purpose of scientific research and development.

All AI operators with any connection to the European market should determine whether their AI products or their use of AI products are within the scope of the law.

4. Classify the AI systems and general purpose AI models based on the AI Act risk categories.

The obligations the act imposes in relation to AI systems and GPAIMs depend on the risk classification attributed to each. For the AI systems identified during the mapping exercise, organizations should determine whether any would be prohibited or classified as high risk under the AI Act. If not, those systems may still be subject to more limited obligations under the act, such as transparency requirements. They may also be subject to the General Product Safety Regulation (EU) 2023/988, which entered into force on 12 June 2023 and applies from 13 December 2024.

In addition, providers of GPAIMs with “systemic risk” are subject to obligations beyond those that attach to GPAIMs without systemic risk. The AI Act sets out the criteria, including a training-compute threshold, that determine whether a GPAIM is classified as presenting systemic risk.

Once the AI systems and models have been classified and the organization’s role in relation to each is clear, the organization can prepare a detailed compliance road map (see Action #6).

5. Start incorporating AI Act requirements into your contracts and due diligence processes.

Many agreements and acquisitions covering AI products that are being negotiated now will still be in force or relevant when the AI Act takes full effect. Organizations should update their contract terms, due diligence and procurement processes to anticipate AI Act risks and meet compliance requirements.

Organizations may also identify agreements that have already been executed that they will need to amend.

6. Put in place an AI governance framework.

AI governance refers to the internal policies and processes an organization adopts and implements to ensure its use and development of AI technologies align with its mission, risk profile, legal obligations and ethical priorities. The actions described in this list would form part of an AI governance strategy.

The most effective AI governance frameworks are supported at the highest level of the organization. People with diverse profiles from different parts of the organization should oversee the framework, for instance in the form of an internal AI working group.

The governance framework will also help align AI-related compliance initiatives with overlapping compliance duties, such as those related to data protection and cybersecurity.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
