Understanding the EU AI Act: A Comprehensive Overview

The European Union (EU) has taken a significant step towards regulating artificial intelligence (AI) with the passage of the EU AI Act. This comprehensive legislation establishes a framework to ensure that AI is developed and used in a way that is safe and ethical and that respects fundamental rights.

In this article, we will delve into the genesis of the EU AI Act, dissect its structure and key provisions, explore its scope and impact on AI development, and discuss the importance of compliance with this groundbreaking regulation. 

Why the EU AI Act Matters for Data Privacy 

As this article will describe, there are plenty of reasons to want AI regulation: AI models can make biased decisions, be used maliciously to harm people and infrastructure, and more. But of all the domains where AI could do harm, data privacy rights may be among the most impacted and the least understood.

For one, generative AI systems must be trained on vast sets of data, and the odds are good that some of that data will include individuals' personal information. If those individuals aren't made aware that their information is being collected to train an AI, then that collection could violate laws like the GDPR and the CPRA. That's true even if their personal data is scrubbed from the AI model's output, which isn't always the case: any information used to train the model can potentially be regurgitated in its output unless certain steps are taken first, such as using privacy-enhancing technologies to reduce the use of personal information. And user interactions are often used to train AI models as well; in such cases, users need to be informed and their consent secured first.
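
To make the training-data concern concrete, here is a minimal sketch of the kind of privacy-enhancing step described above: scrubbing obvious personal identifiers from text before it is used for training. The regex patterns and placeholder tokens are illustrative assumptions; production pipelines typically rely on more robust, NER-based redaction tools rather than regexes alone.

```python
import re

# Minimal sketch: scrub obvious personal identifiers from training text
# before it reaches a model. The patterns below are illustrative only.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact_pii(sample))
# -> Contact Jane at [EMAIL] or [PHONE].
```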

Beyond exposing personal information, AI can also be used to thwart the cybersecurity measures that protect it. Experts have raised the possibility that novel computational techniques could break encryption, in which case virtually every kind of secret and private digital information would be at risk. But even if AI never enables that technological leap, it is already quite capable of undermining cybersecurity measures in other ways.

Lastly, AI can be used to identify individuals and collect their personal information. Consider the case of Clearview AI, which came under fire for collecting individuals' biometric data without their consent. Clearview AI scraped the internet for photos of individuals, built profiles on them, and offered its tool to law enforcement professionals. However, the tool has also been used by Clearview AI's investors for private purposes, and the company suffered a data breach that exposed its database of personal information.

From a data privacy perspective, it's clear that AI-specific regulation is important. Data privacy regulations are equipped to handle infractions associated with the unlawful collection and processing of personal information for AI training, but AI is capable of violating personal privacy in other ways. As a result, it needs purpose-built regulation.

The Scope of the EU AI Act: The GDPR of AI? 

The EU AI Act has a broad scope that encompasses various industries and geographical boundaries. 

Although the EU AI Act primarily targets AI systems developed and deployed within the European Union, its impact extends beyond Europe's borders, just like the GDPR. The Act recognizes the global nature of AI technology and aims to establish a harmonized framework for AI regulation. 

Organizations outside the EU that offer AI products or services to users in the EU will also need to adhere to the Act. This extraterritorial reach ensures that AI systems used by EU residents, regardless of their geographical location, meet the same standards. 

Dissecting the EU AI Act: Major Provisions and Requirements 

Since AI is such a broad category, the AI Act takes a risk-based approach to regulation. It defines four tiers of risk, each with different levels of obligations: 

  1. Minimal or no risk 
  2. Limited risk 
  3. High risk 
  4. Unacceptable risk 

Developers are expected to determine their risk category themselves based on guidance from the Act. This is a crucial step: not only does the category determine which requirements AI providers must meet, but miscategorizing an AI system is itself a violation of the law.
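
As a rough illustration of this self-categorization step, the sketch below models the four tiers and a toy lookup from example use cases (drawn from the sections that follow) to tiers. The mappings are simplified assumptions made for illustration; an actual determination must follow the Act's definitions and annexes.

```python
from enum import Enum

# The Act's four risk tiers, plus a toy self-classification lookup.
# The example mappings are simplified assumptions, not legal guidance.

class RiskTier(Enum):
    MINIMAL = 1       # e.g., spam filters, video game AI
    LIMITED = 2       # e.g., chatbots (transparency obligations)
    HIGH = 3          # e.g., CV-screening tools for hiring
    UNACCEPTABLE = 4  # e.g., social scoring (banned outright)

EXAMPLE_CLASSIFICATIONS = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "cv screening for hiring": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Toy lookup; under the real Act, misclassification is itself a violation."""
    return EXAMPLE_CLASSIFICATIONS[use_case.lower()]

print(classify("Social scoring"))  # RiskTier.UNACCEPTABLE
```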

Minimal or No Risk Systems 

These are systems like AI used in video games or spam filters. They pose essentially no risk to society at large and, as a result, are subject to no requirements.

Limited-Risk Systems 

These systems must meet specific transparency obligations. Chatbots are a good example of a limited-risk system: when interacting with a chatbot, an individual must first be informed that they are engaging with a machine and be given the opportunity to speak with a human instead.
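
A minimal sketch of what that obligation could look like in practice appears below. Every function name here is a hypothetical stand-in, not part of any real chatbot framework's API.

```python
# Minimal sketch of the transparency behavior described above: disclose
# the machine up front and offer a route to a human. All names are
# illustrative stand-ins.

def send_message(text: str) -> None:
    print(f"BOT: {text}")

def generate_reply(text: str) -> str:
    return f"(automated answer to: {text!r})"

def escalate_to_human(user_id: str) -> None:
    print(f"Routing user {user_id} to a human agent...")

def handle_message(user_id: str, text: str) -> None:
    if text.strip().lower() == "human":
        escalate_to_human(user_id)  # the human-handoff path described above
    else:
        send_message(generate_reply(text))

# The disclosure happens before any conversation starts.
send_message("You are chatting with an automated assistant. "
             "Type 'human' at any time to speak with a person.")
handle_message("u123", "What are your opening hours?")
handle_message("u123", "human")
```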

High-Risk Systems 

Providers of high-risk systems will need to meet additional obligations, including conducting fundamental rights impact assessments, adhering to data governance and record-keeping practices, providing detailed documentation, and meeting high standards for accuracy and cybersecurity. Furthermore, these systems must be registered in an EU-wide public database. 

Any incidents associated with high-risk AI systems must be reported as well. Incidents could include damage to a person's health or death, serious damage to property or the environment, disruption to infrastructure, or violations of fundamental EU rights.
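
As an illustration of the record-keeping and incident-reporting obligations, here is a hypothetical sketch of a serious-incident record. The field names and structure are assumptions for the sake of example; the actual reporting templates are defined by regulators.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Illustrative record for the kind of serious-incident report a high-risk
# provider would keep and file. Fields and format are hypothetical.

class IncidentType(Enum):
    HEALTH_HARM = "damage to a person's health or death"
    PROPERTY_OR_ENVIRONMENT = "serious damage to property or the environment"
    INFRASTRUCTURE = "disruption to infrastructure"
    FUNDAMENTAL_RIGHTS = "violation of fundamental EU rights"

@dataclass
class SeriousIncidentReport:
    system_id: str          # e.g., the system's ID in the EU-wide database
    occurred_on: date
    incident_type: IncidentType
    description: str

report = SeriousIncidentReport(
    system_id="EU-DB-000123",
    occurred_on=date(2025, 3, 14),
    incident_type=IncidentType.INFRASTRUCTURE,
    description="Automated grid-balancing output caused a local outage.",
)
print(report.incident_type.value)
```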

The Act identifies two categories of high-risk systems: safety components subject to existing safety standards, and systems used for a specific, sensitive purpose. AI-powered medical systems might fall into the first category, while the second category covers eight broad areas:

  • Biometrics 
  • Critical infrastructure
  • Education and vocational training 
  • Employment, workers management, and access to self-employment 
  • Access to essential services 
  • Law enforcement 
  • Border control management 
  • Administration of justice and democratic processes 

Unacceptable Risk Systems 

These systems are deemed too risky to be permitted at all and are banned outright. This includes systems designed for:

  • The manipulation of human behavior to circumvent free will 
  • The exploitation of the vulnerabilities of specific groups of people
  • Untargeted scraping of facial data for the creation of facial recognition databases 
  • Social scoring 
  • Emotion recognition in workplaces or educational settings 
  • Biometric categorization systems based on certain characteristics, such as political, religious, and philosophical beliefs; sexual orientation; and race

The use of biometric identification was a particular sticking point during debates over the AI Act. Many of these systems are of high value to law enforcement agencies, but it's still possible that governments and those same agencies could abuse them.

Ultimately, law enforcement is exempt from the ban on biometric identification under certain circumstances: prior judicial authorization is required, the target must be suspected of a specific, serious crime such as terrorism or kidnapping, and the use of biometric identification systems must be limited in time and location.

General Purpose AI 

General Purpose AI (GPAI) models, also known as foundation models, are a particularly difficult class of AI to regulate. Here's how the European Parliament described foundation models:

Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. 

ChatGPT, for example, is built on a foundation model. Since these models can be used for a wide range of purposes, both harmful and harmless, there needs to be a nuanced means of regulating them.

Currently, the AI Act provides two categories for foundation models: high-impact and low-impact systems. High-impact systems will be required to adhere to additional requirements, including: 

  • Model evaluations
  • Systemic risk assessment and mitigation
  • Adversarial testing
  • Incident reporting to the European Commission
  • Additional cybersecurity measures
  • Energy efficiency reporting
  • And more

All foundation models, regardless of their risk profile, are also required to provide technical documentation, summarize training content, and meet other transparency requirements. While it's not the only criterion for a GPAI model to be categorized as high-impact, the AI Act presumes that models trained using a significant amount of computing power (specifically, more than 10^25 floating point operations, or FLOPs) have significant enough capabilities and underlying data to represent a systemic risk.
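
To see what the 10^25 FLOP threshold means in practice, the sketch below uses the common industry rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens FLOPs. Both the heuristic and the example model size are assumptions for illustration, not anything the Act itself prescribes.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP presumption,
# using the rough "training FLOPs ~ 6 x parameters x tokens" heuristic
# for dense transformers. The heuristic is an industry rule of thumb.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")  # ~6.30e+24
print("presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)
```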

Compliance with the EU AI Act 

Complying with the EU AI Act is of paramount importance for organizations involved in AI development and usage. 

Steps Towards Compliance 

Organizations must take several steps to ensure compliance with the EU AI Act. This includes conducting AI system assessments, implementing necessary safeguards, establishing effective governance mechanisms, and adhering to transparency and disclosure requirements. It is essential to develop a comprehensive compliance strategy that aligns with the Act's provisions. 

Penalties for Non-Compliance 

The EU AI Act includes provisions for penalties in cases of non-compliance. Organizations failing to meet the Act's requirements may face significant fines and reputational damage. Specifically, businesses that violate the AI Act are subject to fines ranging from €7.5 million (~$8.23 million) or 1.5% of turnover to €35 million (~$38.43 million) or 7% of global turnover, depending on the violation.
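
Under the Act's penalty provisions, each fine tier is capped at a fixed amount or a percentage of global annual turnover, whichever is higher. The sketch below computes the resulting maximum exposure using the figures cited above; the turnover value is a hypothetical example.

```python
# Maximum fine exposure under a tier: the fixed cap or a share of global
# annual turnover, whichever is higher. Turnover figure is hypothetical.

def max_fine(turnover_eur: float, fixed_cap: float, pct_cap: float) -> float:
    return max(fixed_cap, pct_cap * turnover_eur)

turnover = 2_000_000_000  # EUR 2B global annual turnover (example)

# Lower tier: EUR 7.5M or 1.5% of turnover
print(f"EUR {max_fine(turnover, 7_500_000, 0.015):,.0f}")   # EUR 30,000,000
# Upper tier: EUR 35M or 7% of turnover
print(f"EUR {max_fine(turnover, 35_000_000, 0.07):,.0f}")   # EUR 140,000,000
```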

As the EU AI Act comes into effect, it is crucial for all stakeholders involved in AI development and usage to understand its regulations and implications fully. By embracing responsible AI practices and complying with the Act's requirements, organizations can contribute to the development of AI technologies that benefit society while safeguarding fundamental rights and values. 

Frequently Asked Questions 

When Does the EU AI Act Take Effect? 

The exact date is still to be determined, but the Act is expected to take effect in 2026. However, compliance will likely be complicated, and businesses are urged to begin their efforts as soon as possible.

Who Is Subject to the EU AI Act? 

Any entity that offers AI services to any market in the EU will need to comply, regardless of whether it is based in the EU or another region. Furthermore, AI providers within the EU will be subject to the Act. Lastly, if any outputs of an AI system are used in the EU, the AI provider is subject to the Act, even if the provider and primary user are based outside the EU.

What are the Penalties for Violating the EU AI Act? 

Businesses that violate the AI Act are subject to fines ranging from €7.5 million (~$8.23 million) or 1.5% of turnover to €35 million (~$38.43 million) or 7% of global turnover, depending on the violation.
