Europe Remains At The Forefront of Digital Regulation

On March 12, 2024, the European Parliament passed the EU AI Act. The European Parliament and commentators are calling the EU AI Act “world-leading” Artificial Intelligence (AI) regulation. What exactly does the EU AI Act aim to do, and what effects will it have on US-based companies?

The EU AI Act sorts different types of AI into risk pools. AI tools will be placed into three broad categories – minimal, high, and unacceptable risk. The EU views the minimal risk category as encompassing AI tools that are safe, transparent, traceable, non-discriminatory, environmentally friendly, and directly overseen by humans[1]. Presently, there do not appear to be any AI tools that the EU deems minimal risk.

High-risk AI tools fall broadly into two subcategories: tools covered by the EU’s product safety legislation and tools that affect human rights or critical infrastructure. The EU AI Act seeks to empower the European Commission to establish an office that will evaluate new, “high-risk” AI tools before they come to market and monitor them periodically once they are on the market.

Finally, unacceptable risk AI tools will be banned. The EU specifically cites the following AI tools that will be banned in the EU:

  1. Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children;
  2. Social scoring: classifying people based on behavior, socioeconomic status, or personal characteristics;
  3. Biometric identification and categorization of people; and
  4. Real-time and remote biometric identification systems, such as facial recognition.

Law enforcement use of facial recognition technology will be allowed only under the supervision of the appropriate court of a Member State.

Generative AI tools, such as OpenAI’s ChatGPT and Google’s Gemini, will be required to disclose to European regulators certain portions of their algorithms and the data on which the tools were trained. AI tools capable of voice, image, or video manipulation, such as DALL-E, will be required to label their content as “artificially manipulated.”

The EU AI Act will not be officially enacted until ratified by Member States. Such ratification will likely occur in 2024.

United States-based companies that create or use AI tools will not be directly affected unless they derive revenue from within the EU. Once ratification of the EU AI Act is complete, US companies doing business within the EU will need to comply immediately. Much like the EU’s data privacy scheme, the GDPR, the EU AI Act carries steep fines for non-compliance. The EU AI Act permits the European Commission to fine non-compliant companies up to seven percent of their global revenue or up to $38 million, whichever is higher, per violation. US companies should understand and take steps to comply with the EU AI Act much as they did with the GDPR.


[1] https://www.europarl.europa.eu/topics/en/article/20201015STO89417/ai-rules-what-the-european-parliament-wants.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Cranfill Sumner LLP | Attorney Advertising

Written by:

Cranfill Sumner LLP
