The New EU Approach to the Regulation of Artificial Intelligence

Orrick, Herrington & Sutcliffe LLP

European Commission publishes communication and proposal for a Regulation on Artificial Intelligence

Introduction

The European Commission (the "Commission") recently published its highly anticipated communication and proposal for a "Regulation laying down harmonised rules on artificial intelligence"[1] (the "AI Regulation"). The AI Regulation is the first legal framework anywhere in the world focused solely on artificial intelligence ("AI") and bears striking similarities to the GDPR. If adopted as drafted, the AI Regulation would have significant consequences for many organisations that develop, sell or use AI systems, including a new set of legal obligations and a monitoring and enforcement regime with hefty penalties for non-compliance.

At its heart, the AI Regulation is focused on the identification and monitoring of "high-risk" AI systems. The key questions for organisations that develop, sell or use AI will be whether the AI system in question is likely to be considered "high-risk" and, if so, what this means for those systems should the AI Regulation be adopted as drafted.

This article concentrates on the key aspects of the AI Regulation and the implications for organisations that provide AI systems that have some degree of nexus with the European Union ("EU").

What does the AI Regulation do?

The AI Regulation governs the "development, placement on the market and use of AI systems in the [EU] following a proportionate risk-based approach"[2]. As a Regulation, it will introduce a "uniform application of the new rules… the prohibition of certain harmful AI-enabled practices and the classification of certain AI systems"[3], which will have direct effect in all EU Member States. The AI Regulation applies across all sectors (public and private) to "ensure a level playing field"[4].

As an EU regulation, it will apply directly, without further implementation by the EU Member States. A violation of the AI Regulation can potentially even give rise to civil claims by individuals under Member State law.

What constitutes AI under the AI Regulation?

The AI Regulation defines AI as "software that is developed with one or more of the techniques and approaches listed in Annex I [of the AI Regulation] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with"[5].

Recognising the pace of technological development, the EU has attempted to make the definition "as technology neutral and future proof as possible"[6]. Accordingly, Annex I can be "adapted by the Commission in line with new technological developments"[7].

Does the AI Regulation have extraterritorial effect?

Like the GDPR, the AI Regulation is intended to have extraterritorial effect. Subject to some specific exceptions, the AI Regulation applies to:

  • Providers placing on the market or putting into service AI systems in the EU (regardless of where the providers are located);
  • Users of AI systems located within the EU; and
  • Providers and users of AI systems that are located outside the EU, where the output is used in the EU.
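The three limbs of the territorial scope test above can be sketched as a simple boolean check. This is purely illustrative: the function and parameter names are ours, and the real test under Article 2 turns on legal definitions that a few flags cannot capture.

```python
def in_scope(actor_role: str, actor_in_eu: bool,
             system_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Illustrative sketch of the AI Regulation's territorial scope test.

    Roles and flags are hypothetical simplifications of Article 2,
    not terms defined in the Regulation itself.
    """
    if actor_role == "provider" and system_on_eu_market:
        return True  # providers placing/putting into service in the EU, wherever located
    if actor_role == "user" and actor_in_eu:
        return True  # users of AI systems located within the EU
    if output_used_in_eu:
        return True  # providers/users outside the EU where the output is used in the EU
    return False
```

As the third limb shows, an organisation with no EU establishment can still be caught merely because its system's output is used in the EU.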

Proportionate and risk-based approach

The AI Regulation introduces a four-tier system of risk:

  • Prohibited AI ("unacceptable risk"): A limited set of AI uses are banned under the AI Regulation as they are deemed to violate fundamental EU rights.

    Examples include: (i) the use of subliminal techniques beyond an individual's consciousness in order to materially distort their behaviour; (ii) exploiting the vulnerabilities of a specific group of individuals due to their age; (iii) social scoring by public authorities; and (iv) "real-time" remote biometric identification systems in publicly accessible spaces used for law enforcement purposes (subject to limited exceptions).

  • Highly Regulated AI ("high-risk"): An AI system will be "high-risk" if it creates a high risk to the health and safety or fundamental rights of natural persons. For example, in line with existing product safety legislation, AI used as a safety component of a product (or which is, itself, such a product) will likely qualify as "high risk" under the AI Regulation. Other "high-risk" AI systems are set out at Annex III of the AI Regulation, which the Commission can review in order to align them with the evolution of AI use cases (i.e. future-proofing).

    Examples include: (i) "Real-time" and "post" remote biometric identification; (ii) evaluating an individual's creditworthiness (except where used by small-scale providers for their own use); and (iii) the use of AI systems in recruitment and promotion (including changes to roles and responsibilities) in an employment context.

    "High-risk" AI system requirements and obligations

    Chapter 2 of Title III sets out detailed "requirements" for "high-risk" AI systems. Chapter 3 of Title III sets out specific "obligations" on providers, users and other participants across the AI value chain (e.g. importers and distributors).

    Providers[8] are responsible for the majority of the specific obligations in relation to "high-risk" AI systems including:

    • Establishing, implementing, documenting and maintaining a risk management system;
    • Data quality, management and governance;
    • Drawing-up technical documentation and automatic recording of logs;
    • Transparency and providing information to users;
    • Effective human oversight of the AI system; and
    • Designing and developing the AI system to achieve accuracy, robustness and security.

    Additional responsibilities of providers, in relation to "high-risk" AI systems, include:

    • Ensuring that the AI system undergoes a conformity assessment procedure, before being used/sold;
    • Registering the AI system in an EU database and affixing the CE marking, before it is used/sold;
    • Taking immediate corrective actions, if the AI system does not comply with the AI Regulation and informing the national competent authority of this;
    • Immediately informing the national competent authority of any "serious incident" or "malfunctioning" of the "high-risk" AI system, once a reasonable likelihood of a link between the AI system and the "serious incident" or "malfunctioning" has been established. In any event, a notification must be made no later than 15 days after becoming aware of the "serious incident" or "malfunctioning". This is similar to the obligation to notify personal data breaches to regulators under the GDPR, though it provides a longer window than the GDPR's 72 hours. Where personal data is involved, the provider is likely to be subject to reporting obligations under both the AI Regulation and the GDPR. However, there is no equivalent provision in the AI Regulation for notifying individuals;
    • Upon the request of a national competent authority, demonstrating the conformity of the AI system; and
    • Where an importer cannot be identified and where the provider is established outside of the EU, appointing an authorised representative which is established in the EU, prior to making the AI system available on the market.
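The two reporting windows mentioned above (15 days under the draft AI Regulation for serious incidents, 72 hours under the GDPR for personal data breaches) can be compared with a short sketch. The helper name is ours, not a term from either regulation, and the actual trigger points differ between the two regimes.

```python
from datetime import datetime, timedelta

def notification_deadlines(awareness: datetime) -> dict:
    """Illustrative comparison of the outer reporting windows, measured from
    the point of becoming aware. Hypothetical helper; not legal advice."""
    return {
        "ai_regulation_serious_incident": awareness + timedelta(days=15),
        "gdpr_personal_data_breach": awareness + timedelta(hours=72),
    }
```

The contrast is stark: a provider facing an incident involving personal data may need to notify its data protection authority well before the AI Regulation's 15-day outer limit expires.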

    Obligations on other parties in relation to "high risk" AI systems

    Chapter 3 of Title III establishes specific obligations for importers (Article 26), distributors (Article 27) and users (Article 29). Further obligations, which broadly cover "distributors, importers, users or any other third-party", can be found at Article 28. These parties will assume the same extensive obligations as providers in relation to "high-risk" AI systems if they:

    • Place on the market or put into service an AI system under their name or trademark (thus capturing white-labelled AI systems);
    • Modify the intended purpose of an AI system already placed on the market or put into service; or
    • Make a substantial modification to the AI system.

    Notifying authorities and conformity assessments

    Under Chapter 4 of Title III, Member States are obliged to establish a "notifying authority", responsible for the assessment, designation and notification of "conformity assessment bodies", which carry out independent assessment activities (testing, certification and inspection) of "high-risk" AI systems.

    Chapter 5 of Title III sets out the "high-risk" AI system conformity assessment regime under the AI Regulation.

  • Softly Regulated AI ("limited risk"): Title IV of the AI Regulation provides for transparency obligations in relation to certain other AI systems:
    • When persons interact with an AI system (e.g. a website chatbot) or their emotions or characteristics are recognised through automated means, people must be informed that this is happening; and
    • If an AI system is used to generate or manipulate image, audio or video content that resembles authentic content ('deep fakes'), there is an obligation to disclose to individuals that the content is generated through automated means (subject to limited exceptions).
  • Other AI ("minimal risk"): All other AI systems can be developed and used subject to existing legislation (including the GDPR), without additional legal obligations under the AI Regulation.

Sanctions

At Article 71, the AI Regulation provides for a GDPR-like sanction regime for non-compliance. The percentages are based upon a company's total worldwide annual turnover of the preceding financial year:

  • Up to €30m or 6% (whichever is higher) for infringements of the prohibited AI practices or non-compliance with the data-related requirements;
  • Up to €20m or 4% (whichever is higher) for non-compliance with any of the other requirements or obligations of the AI Regulation;
  • Up to €10m or 2% (whichever is higher) for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
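The "whichever is higher" mechanics mirror the GDPR: the applicable cap is the greater of the fixed amount and the stated percentage of worldwide annual turnover. A minimal sketch of that arithmetic follows; the tier amounts are taken from Article 71 as described above, but the dictionary keys and function name are ours.

```python
# Illustrative sketch of the Article 71 fine caps ("whichever is higher").
# Tier amounts and percentages reflect the proposal; the names are ours.
# Percentages are held in basis points so the integer arithmetic stays exact.
TIERS = {
    "prohibited_or_data": (30_000_000, 600),     # prohibited practices / data requirements (6%)
    "other_obligations": (20_000_000, 400),      # any other requirement or obligation (4%)
    "incorrect_information": (10_000_000, 200),  # misleading information to authorities (2%)
}

def max_fine(tier: str, worldwide_turnover: int) -> int:
    """Return the cap: the higher of the fixed amount and the turnover percentage."""
    fixed_cap, basis_points = TIERS[tier]
    return max(fixed_cap, worldwide_turnover * basis_points // 10_000)
```

For example, a company with €1bn worldwide turnover would face a top-tier cap of €60m (6% of turnover exceeding the €30m floor), whereas for smaller companies the fixed amounts bite first.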

Notably, the AI Regulation does not provide for a specific right to compensation (i.e. an equivalent of Article 82 GDPR), which may provide some comfort. Of course, this does not exempt an AI system caught by the AI Regulation from the GDPR, under which the private right of action in Article 82 remains available. Moreover, because the AI Regulation is an EU regulation rather than a directive, a violation can potentially still give rise to civil claims by individuals under Member State law.

Enforcement

Each Member State must designate at least one national competent authority to supervise the AI Regulation's application and implementation and carry out market surveillance activities. It is likely that these powers will be designated to existing regulatory bodies such as data protection authorities.

European Artificial Intelligence Board ("EAIB")

Like the GDPR, the AI Regulation would see the establishment of an 'overarching' board to facilitate a smooth, effective and harmonised implementation of the new rules (the AI equivalent of the European Data Protection Board). The EAIB would consist of representatives of national competent authorities, the European Data Protection Supervisor, and the Commission.

What next?

To echo the comments of the Commission's Executive Vice-President, Margrethe Vestager, the AI Regulation is nothing short of "a landmark proposal". As drafted, the AI Regulation contains extensive regulatory compliance implications for organisations across a wide range of sectors.

As for next steps, the European Parliament and the Council (representing the Member States) will consider the Commission's proposal under the ordinary legislative procedure. During that time, the proposal is likely to be subject to extensive scrutiny and amendment. Once adopted, the final AI Regulation will be directly applicable across the EU. The AI Regulation provides for a two-year transition period following adoption, which means that the new law could apply as early as 2024.

Although the AI Regulation is currently in draft form, it is sensible for AI providers and other participants in the AI value chain (particularly those who may fall into the "high-risk" category) to acquaint themselves with its proposed requirements: based on the political "mood music", regulation of AI along these lines is likely on the horizon.

The GDPR is well-known for spearheading the global privacy "revolution". Time will tell whether the AI Regulation, which draws clear influence and inspiration from the GDPR, serves as the catalyst for a new dawn of international AI regulation - we suspect that it will.


[1] Explanatory Memorandum to the proposal (page 1).

[2] Explanatory Memorandum to the proposal (page 3).

[3] Explanatory Memorandum to the proposal (page 7).

[4] Explanatory Memorandum to the proposal (page 6).

[5] Article 3(1) of the AI Regulation.

[6] Explanatory Memorandum to the proposal (page 12).

[7] Explanatory Memorandum to the proposal (page 12).

[8] 'Provider' means a natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
