Companies Prepare for Change as EU Legislators Agree on EU Artificial Intelligence Act

On December 8, 2023, following marathon negotiations, European Union (‘EU’) legislators reached a political agreement on the much-anticipated EU Artificial Intelligence Act (‘AI Act’). The AI Act is billed as the first comprehensive legal framework on AI systems worldwide, and will impose obligations on both private and public sector actors that develop, import, distribute, or use in-scope AI systems.

At its core, the AI Act will maintain the risk-based approach which was the focus of the European Commission’s original proposal in 2021. Under that approach, the obligations that apply to an AI system depend on the risk ‘tier’ the system falls within:

  • Minimal/limited risk AI systems will be exempt from the majority of the provisions in the AI Act, as these systems present only minimal (or no) risk to the safety or rights of individuals.
  • High-risk AI systems will be subject to numerous strict and wide-ranging obligations, including requirements relating to risk mitigation, the quality of data sets, and activity logging, as well as requirements to provide clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Examples of ‘high-risk’ AI systems include those used in connection with recruitment and other HR-related purposes; medical devices; and certain critical infrastructure (such as water, gas, and electricity networks). AI systems used for biometric identification, categorization, and emotion recognition are also considered high-risk.
  • Unacceptable risk AI systems will be prohibited (albeit with some narrow exceptions). These include certain AI systems used to manipulate human behavior, or AI systems used for ‘social scoring’ purposes. In addition, some uses of biometric AI systems will be prohibited, for example emotion recognition systems used in the workplace.

However, the political agreement goes further than the original European Commission proposal. For example, the AI Act will also provide for rules in relation to:

  • AI systems presenting specific transparency risks, such as chatbots and AI systems generating content. These will be subject to certain transparency obligations – for example, obligations to make users aware they are interacting with a machine, and to disclose that content is AI-generated.
  • General purpose AI / foundation models. The political agreement provides that general purpose AI systems and models must comply with specific obligations before they are placed on the market, including transparency obligations. A stricter regime will also be applied to certain ‘high impact’ models, which must – amongst other things – be subjected to model evaluations and adversarial testing.

The political agreement also cemented the AI Act’s approach to hotly debated topics, such as the very definition of ‘AI System’ (now aligned with the OECD’s approach) and the requirement imposed on deployers of certain high-risk AI systems to conduct so-called ‘fundamental rights impact assessments’ (‘FRIAs’).

What will happen next?

Whilst a political agreement has been reached, technical work will continue in the coming weeks to finalize the details of the AI Act’s wording – and so companies should watch for the full and final text. Once that finalized text has been formally adopted by EU legislators, there will be a two-year transition period before the AI Act becomes fully applicable (with some exceptions).

In the meantime, companies may want to assess, for example:

  • To what extent they are already developing, selling, or using AI systems that could fall within the scope of the AI Act;
  • Which of the AI Act’s obligations are likely to apply to those AI systems; and
  • How the company’s existing practices stack up against those requirements.

Finally, companies with links to the UK should also watch for updates from the UK government. The AI Act will not form part of the UK’s laws (following the UK’s exit from the EU) – but earlier this year the UK government proposed its own (principles-based) framework for AI regulation, which it submitted to public consultation. That consultation has now closed – and the UK government has indicated it will publish its response before the end of 2023.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Alston & Bird | Attorney Advertising

Written by:

Alston & Bird