The Latest on the EU’s Proposed Artificial Intelligence Act

Perkins Coie

The fast-developing innovations brought by generative artificial intelligence (AI) are intensifying calls from industry and government to consider new regulatory frameworks. The EU was already working toward adopting its AI Act, first proposed on April 21, 2021 (as we previously summarized), before generative AI chatbots were widely released. While the EU’s AI Act was touted as the world’s first and most comprehensive regulatory framework for AI, some observers noted that it risked being outdated before it even became legally effective. Since the initial proposal, the European Commission (the Commission), the Council of the European Union, and the European Parliament have been modifying and refining the initial draft, most recently to address the implications of generative AI.

The Commission’s initial AI Act draft proposed a risk-based regulatory approach, transparency requirements, and measures to protect against bias in AI systems. The risk-based approach would impose stricter requirements and oversight on high-risk AI systems (such as those used in healthcare), including conformity assessments and data quality and governance obligations. Certain other uses of AI, such as subliminal techniques to manipulate user behavior, would be prohibited entirely. Entities that fail to comply with the regulation could face fines of up to 6% of their global annual revenue.

This Update provides a fresh look at the AI Act’s legislative status and its substantive evolution before it becomes legally effective.

The European Parliament’s Proposal

On April 27, 2023, the Members of the European Parliament (MEPs) reached a provisional political agreement on an amended version of the Commission’s draft. The text (notably the recitals) may still be subject to minor additions or technical amendments. Nevertheless, the final version of the Parliament’s proposal is expected by mid-June, following a key committee vote scheduled for May 11, 2023, and subsequent ratification in plenary.

The initial draft of the MEPs’ proposal, published in April 2022, received more than 3,000 proposed amendments. While the Parliament was deliberating, rapid technological developments and the increasing availability of generative AI technologies reshaped the debate around its proposed version. Addressing generative AI became one of the Parliament’s key areas of focus during its lengthy negotiation process. As discussed below, while the Council proposed to address this topic in a future implementing act, the Parliament was not keen to wait for such future action. Instead, the latest political agreement aims to tackle generative AI head-on.

General Purpose AI and Foundation Models

To address generative AI, the Parliament introduced a further distinction between “general purpose AI” (as proposed by the Council) and “foundation models,” such as GPT-4 and Stable Diffusion. The former covers AI systems that can be used in and adapted to a wide range of applications for which they were not intentionally and specifically designed. The latter, which is subject to a stricter regulatory regime under the Parliament’s proposal, covers AI systems that are trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks.

In the Parliament’s proposal, foundation models are subject to specific requirements. For instance, the provisional agreement provides that before a model is made available, testing and analysis (including by independent experts) must be conducted to identify and mitigate reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law. These models must also maintain appropriate levels of performance, predictability, interpretability, correctability, safety, and cybersecurity throughout their lifecycles, including data governance measures to examine possible biases and apply appropriate mitigations.

Generative AI

In the Parliament’s amended version of the AI Act draft, a foundation model that is “used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video” qualifies as “generative AI” and is subject to specific requirements in addition to the transparency obligations applicable to other foundation models. Providers of foundation models used in generative AI systems must, among other things, design and develop the foundation model in accordance with EU law and fundamental rights, including freedom of expression. They must also make publicly available a summary of any training data protected under copyright law. These requirements apply to foundation models across the entire AI value chain, regardless of their distribution channels, development methods, or type of training data. In particular, the Parliament’s amended draft requires providers of foundation models to assist downstream providers of generative AI systems in putting adequate safeguards in place.

Beyond the additions for foundation models and generative AI systems, the Parliament also debated many other modifications to the Commission’s original draft. For example, MEPs agreed that certain technologies, such as real-time facial emotion recognition, biometric identification, and biometric categorization systems, should be banned entirely. MEPs also extensively debated which uses of AI technologies should be considered “high-risk” and what the obligations for high-risk systems should be. Although the list of “high-risk” areas and use cases was expanded, the Parliament’s version appears to be more flexible in that it gives an AI provider the option to notify the national supervisory authority when the provider, based on its own assessment, concludes that its AI system does not pose a “significant risk of harm” to health, safety, or fundamental rights.

The Council of the EU’s Proposal

The Council of the European Union approved its revised version of the AI Act on December 6, 2022. The Council’s draft is largely similar to the original draft proposed by the Commission in April 2021, but it includes some notable changes.

Scope of the AI Act

The Council’s draft expanded the scope of the AI Act by adding a new section on “general purpose AI,” a category the Commission’s original draft did not cover. A “general purpose AI system” is defined as any AI system that “is intended by the provider to perform generally applicable functions,” which may be used in a “plurality of contexts and be integrated into a plurality of other AI systems.” The Council’s draft clarifies that certain requirements for high-risk AI systems under the AI Act may also apply to general purpose AI systems that are integrated into a system that becomes high-risk, with the exact application to be described in a future implementing act by the Commission. While this addition does not directly target generative AI, the definition of a “general purpose AI system” is likely broad enough to capture some generative AI tools.

The Council also narrowed the definition of the “artificial intelligence systems” covered by the AI Act. The definition in the Commission’s original draft was broad enough to cover many types of software beyond what is commonly considered “AI.” In its revised draft, the Council modified this definition to better distinguish “simpler software systems” from AI, limiting coverage to systems “designed to operate with elements of autonomy using machine learning and/or logic- and knowledge-based approaches” that produce “system-generated output.” Even so, the new definition is arguably still broad enough to cover some “simpler software systems” and thus could create uncertainty about the AI Act’s scope.

The Council’s draft also adds explicit exclusions from the AI Act’s scope, including AI systems used for national security purposes and “any research and development activity regarding AI systems.” Notably, the “any research and development” exclusion seems broad enough to cover research and development conducted by commercial entities, not just academic institutions and nonprofit entities.

Prohibited and High-Risk AI Systems

The Council’s draft expands some of the prohibited uses for AI systems. While the Commission’s original draft of the AI Act prohibited only government entities from developing “social credit” systems, the Council’s draft expands the prohibition to cover private actors as well. Additionally, the Council’s draft adds vulnerabilities arising from a person’s social or economic situation to the types of vulnerabilities that AI systems are forbidden from exploiting.

As for “high-risk” AI systems, the Council modified the AI Act’s compliance obligations to be “more technically feasible and less burdensome for stakeholders.” For example, whereas the Commission’s draft requires that data sets used to train high-risk AI systems be “free of errors and complete,” the Council’s draft adds the qualifier “to the best extent possible.” The Council’s draft also attempts to account for the complexity of AI value chains by “clarifying the allocation of responsibilities and roles of the various actors in those chains.”

Supporting Innovation

In apparent response to observers who argued that the AI Act could stifle innovation in a fast-developing area, the Council revised parts of the AI Act to create a more “innovation-friendly” legal framework. In particular, the Council’s draft clarifies that the regulatory “sandboxes” permitted under the AI Act would allow AI systems to be tested “in real world conditions” under the supervision of “national competent authorities.” The Council also added new provisions that allow unsupervised real-world testing of AI systems under certain circumstances.

Takeaways and Next Steps

If adopted, the AI Act would be the most comprehensive and ambitious effort yet to establish a regulatory regime for AI technologies. While the AI Act is directed at the development and use of AI technologies in Europe (including the intended use of the output produced by a system), it could have a significant impact on the global development and use of artificial intelligence. Much as the General Data Protection Regulation (GDPR) did for privacy regulation, the AI Act could set a global standard followed by other countries and regions. In fact, the draft AI Act appears to have had an impact already: the Brazilian government is considering an AI law similar to the Commission’s original draft of the AI Act, and policymakers in the United States, including Sens. Chuck Schumer and Mark Warner, have shown increasing interest in developing a regulatory framework for AI technologies.

Once the Parliament’s proposal (also known as its negotiating mandate) is formally adopted in plenary session, the Commission, Council, and Parliament will hold “trilogue” meetings to negotiate and reconcile the three versions of the AI Act. At this pace, the final AI Act could be adopted before the next European Parliament elections in June 2024.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Perkins Coie | Attorney Advertising
