European Union Artificial Intelligence Act: An Overview

World’s first comprehensive regulation: The European Union (“EU”) Artificial Intelligence Act (“AI Act”) gets final nod from EU Parliament.

Final Assent by EU Parliament

On March 13, 2024, the EU AI Act received its final assent from the EU Parliament with 523 votes in favor, 46 against, and 49 abstentions, bringing it one step closer to adoption. Minor linguistic changes are still to be approved by the EU Parliament; thereafter, the final approved version will be published in the Official Journal of the EU, anticipated in May. This is a historic moment, as the EU AI Act is the world’s first comprehensive legislation regulating Artificial Intelligence (“AI”) systems according to a risk-based approach.

Brief History

In October 2020, EU leaders asked the Commission to propose ways to increase investment in AI systems and to provide an overarching regulatory framework for them. The intention behind this request was to strike a balance between fostering innovation and ensuring that AI systems are transparent, safe, and non-discriminatory. In response, the European Commission proposed an AI Act on April 21, 2021. The European Parliament approved its version of the EU AI Act on June 14, 2023. This was followed by intense negotiations among the European institutions (the European Parliament, the Council of the European Union, and the European Commission), and on December 8, 2023, the stakeholders reached a provisional agreement.

Applicability

The AI Act will have broad applicability, much like the EU General Data Protection Regulation (“EU GDPR”), and may therefore have ramifications for companies established outside the European Union. Once in effect, the AI Act will apply to: (a) providers placing AI systems on the market in the Union, irrespective of where they are established; (b) deployers of AI systems that have their place of establishment in the EU; (c) providers or deployers located outside the EU where the output produced by the AI system will be used in the EU; (d) importers and distributors of AI systems; (e) authorized representatives of providers of AI systems who are not established in the Union; and (f) affected persons located in the Union. Under one particularly important and intensely negotiated exception, the AI Act will not apply to AI systems used exclusively for military or defense purposes.

Because the EU AI Act will apply to providers and deployers irrespective of their place of establishment, its implementation will have a ripple effect on companies established in the United States (“US”) with operations in the European Union, even though the US at present has no overarching federal legislation governing AI systems akin to the EU AI Act.

Definition of AI Systems

In the absence of any comparable legislation, the AI Act will likely serve as the ‘global standard’ for regulating AI systems. To this effect, the definition of AI systems in the AI Act is aligned with the definition adopted by the Organization for Economic Co-operation and Development (“OECD”). The AI Act defines an AI system as: “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition is deliberately crafted to be broad enough to capture future technological advancements while ensuring that traditional software performing simple automated calculations is not swept within the scope of the Act.

Risk-based Approach

The AI Act is modeled on a risk-based approach, under which high-risk AI systems will be regulated more extensively than those posing lower risk. To this effect, the AI Act divides AI systems into four categories (see the illustrative sketch after this list):

  • Unacceptable Risk: These AI systems are deemed a clear threat to people’s safety, livelihoods, and rights and to run counter to the ethos of the EU; they are therefore prohibited by the AI Act. AI systems with unacceptable risk include (a) social scoring, (b) biometric categorization systems used to deduce and categorize individuals on the basis of attributes such as race, sex life, sexual orientation, and religious beliefs, and (c) AI systems that manipulate human behavior. Even though these AI systems are prohibited, the AI Act carves out a narrow exception permitting certain of them for law enforcement purposes.
  • High Risk: These AI systems are deemed to pose a significant threat to health, safety, fundamental rights, and the rule of law. This category includes AI systems deployed in (a) critical infrastructure (e.g., transport, public utilities), (b) education and vocational training, (c) essential public services (e.g., credit scoring), (d) law enforcement that might impact a person’s fundamental rights, (e) administration of justice, (f) employment/recruitment, and (g) remote biometric identification systems. These AI systems will be required to comply with extensive obligations before they are placed on the market, such as adequate risk assessment, appropriate human oversight, and implementation of risk-mitigation systems.
  • Limited Risk: These AI systems are deemed not to pose any serious threat; the primary risk associated with them is a lack of transparency. The AI Act therefore introduces transparency obligations to ensure that human users are informed when they are interacting with an AI system. Chatbots are a typical example: as long as users are made aware that they are interacting with an AI system, such a system is not deemed to pose a significant threat under the AI Act.
  • Minimal Risk: These AI systems are deemed to carry no real risk and can be deployed without restriction. Examples include AI-enabled video games and inventory-management systems.
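
For readers who want a quick mental model of the tiered structure, the following Python sketch summarizes it as a simple lookup. It is purely illustrative: the tier names, descriptions, and example mappings are our own shorthand, not the Act’s, and real classification turns on a system’s intended purpose and context of use.

from enum import Enum

class RiskTier(Enum):
    # The Act's four tiers, from most to least regulated (our shorthand).
    UNACCEPTABLE = "prohibited outright, narrow law-enforcement carve-outs aside"
    HIGH = "permitted subject to risk assessment, human oversight, and mitigation"
    LIMITED = "permitted subject to transparency obligations"
    MINIMAL = "permitted without additional restrictions"

# Illustrative mappings drawn from the examples in this alert.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "credit-scoring model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")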

General Purpose AI (GPAI) Systems

As the name suggests, GPAI systems are AI solutions that can be used for a variety of different purposes. The AI Act will not apply to GPAI systems used exclusively for scientific research and development. GPAI systems used for other purposes, however, will be regulated by the AI Act with a focus on transparency. For instance, the provider of a GPAI system will be required to make technical documentation, including information about the model’s training and testing, available to the enforcement authorities. Further, GPAI systems must be designed to respect EU copyright law.

If the cumulative amount of compute used to train a GPAI model is greater than 10^25 floating-point operations (FLOPs), the model is presumed to have high-impact capabilities and will be subject to additional regulation. Further, the EU Commission intends to publish and periodically update a list of such GPAI models with systemic risk to ensure compliance.
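
The threshold test itself is a one-line comparison; the hard part in practice is estimating cumulative training compute, which the sketch below simply takes as an assumed input. Everything here other than the statutory 10^25 figure, including the function name and the example estimates, is hypothetical.

# The statutory presumption threshold for systemic risk
# (cumulative training compute, in floating-point operations).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_high_impact(training_flops: float) -> bool:
    # True if the estimated cumulative training compute exceeds the threshold.
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A model trained with an estimated 5e25 FLOPs would be presumed high-impact;
# one trained with 1e24 FLOPs would not.
print(presumed_high_impact(5e25))  # True
print(presumed_high_impact(1e24))  # False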

Fines

Much like the EU GDPR, the AI Act provides for stringent fines to ensure compliance. Most violations will be subject to administrative fines of up to 15 million Euros or 3% of the violator’s total worldwide turnover for the preceding financial year (“Total Turnover”), whichever is higher. Violations of Article 5 (prohibited AI practices), however, will be subject to administrative fines of up to 35 million Euros or 7% of the violator’s Total Turnover, whichever is higher. Further, the supply of incorrect, incomplete, or misleading information to notified bodies or national regulators in response to a request will be subject to administrative fines of up to 7.5 million Euros or 1% of the violator’s Total Turnover, whichever is higher.
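
The “whichever is higher” mechanic means the effective ceiling scales with company size. The short sketch below works through the arithmetic; the tier labels and the example turnover figure are ours, and the amounts are statutory maximums, not predictions of actual fines.

# Maximum fine tiers described above: (fixed cap in EUR, share of Total Turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "most_other_violations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def fine_ceiling(tier: str, total_turnover_eur: float) -> float:
    # The ceiling is the higher of the fixed amount and the turnover percentage.
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * total_turnover_eur)

# For a company with EUR 2 billion in Total Turnover, 7% (EUR 140M)
# exceeds the EUR 35M fixed cap, so turnover sets the ceiling.
print(f"{fine_ceiling('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000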

National Regulators

EU member states have been given one year to nominate the national authorities that will enforce the AI Act in each member state. At this stage, it is premature to speculate who these national regulators will be; however, certain member states already have competent authorities dealing with AI systems. For instance, in 2023, Spain became the first EU country to establish an Agency for the Supervision of Artificial Intelligence (“AESIA”). Similarly, the Department of Enterprise, Trade and Employment is likely to be the national regulator for Ireland. The final picture will emerge only once member states officially notify their appointments to the EU Commission.

Important Dates

The AI Act will enter into force 20 days after it is published in the Official Journal of the EU (which, as noted above, is anticipated to occur in May). The Act will become fully applicable 2 years after its entry into force. However, the legislators have planned a phased implementation, and there are certain exceptions to the 2-year timeline. Some of the major exceptions are as follows (illustrated in the sketch after this list):

  1. Prohibitions on unacceptable risk AI will be effective 6 months after entry into force.
  2. Obligations on providers of general-purpose AI and appointment of member state competent authorities will be effective 12 months after entry into force.
  3. The post-market monitoring system will be implemented 18 months after entry into force.
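
Because every milestone is anchored to the not-yet-known entry-into-force date, the schedule is straightforward to compute once that date is fixed. The sketch below uses a hypothetical entry-into-force date of June 1, 2024 purely for illustration; substitute the actual date once publication occurs.

from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole calendar months, clamping the day
    # for shorter months (e.g., Jan 31 + 1 month -> Feb 28/29).
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days))

# Hypothetical: publication in the Official Journal (anticipated May) plus 20 days.
entry_into_force = date(2024, 6, 1)

milestones = {
    "Prohibitions on unacceptable-risk AI": 6,
    "GPAI obligations / national competent authorities": 12,
    "Post-market monitoring system": 18,
    "Full applicability": 24,
}
for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")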

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Benesch | Attorney Advertising
