European Parliament Adopts Artificial Intelligence Act

Ogletree, Deakins, Nash, Smoak & Stewart, P.C.

On March 13, 2024, European Union policymakers passed the long-anticipated Artificial Intelligence Act (AI Act), the world’s first comprehensive artificial intelligence (AI) legislation, providing employers with much-needed guidance.

Quick Hits

  • The AI Act’s risk-based approach subjects AI applications to four tiers of restrictions and requirements: “unacceptable risk” applications, which are banned; “high risk”; “limited risk”; and “minimal risk.”
  • The AI Act treats the use of AI in the workplace as potentially high-risk.
  • The AI Act is expected to be published soon and go into effect in late spring or early summer of 2024.

While the AI Act does not exclusively regulate employers, it treats the use of AI in the workplace as potentially high-risk, and specifically requires employers to:

  • notify employees and workers’ representatives before implementing “high-risk AI systems,” such as systems that are used for recruiting or other employment-related decision-making purposes;
  • follow “instructions of use” provided by the producers of high-risk AI systems;
  • implement “human oversight” by individuals “who have the necessary competence, training and authority, as well as the necessary support”; and
  • retain records of the AI output, and maintain compliance with other data privacy obligations.

A Risk-Based Approach

1. Unacceptable Risk applications are banned. They include:
  • the scraping of faces from the internet or security footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • cognitive behavioral manipulation;
  • biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and
  • certain cases of predictive policing for individuals.

2. High Risk applications, including the use of AI in employment applications and other aspects of the workplace, are subject to a variety of requirements.

3. Limited Risk applications, such as chatbots, must adhere to transparency obligations.

4. Minimal Risk applications, such as games and spam filters, can be developed and used without restriction.

Hefty Penalties for Violations

Using prohibited AI practices can result in hefty penalties, with fines of up to €35 million, or 7 percent of worldwide annual turnover for the preceding financial year—whichever is higher. Similarly, failure to comply with the AI Act’s data governance and transparency requirements can lead to fines up to €15 million, or 3 percent of worldwide turnover for the preceding financial year. Violation of the AI Act’s other requirements can result in fines of up to €7.5 million or 1 percent of worldwide turnover for the preceding financial year.

The AI Act is expected to be published and go into effect in late spring or early summer of 2024. In the meantime, employers can expect other countries to quickly follow suit with legislation modeled on the AI Act.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Ogletree, Deakins, Nash, Smoak & Stewart, P.C. | Attorney Advertising
