EU Announces Provisional Agreement on the Artificial Intelligence Act

Kilpatrick

On December 8, 2023, the EU announced that Parliament and Council negotiators had “reached a provisional agreement on the Artificial Intelligence Act.” Negotiators had been under pressure to resolve differences to maintain the global lead Europe had assumed on comprehensive regulation of the use of AI. The Act, which has been in preparation since 2018, must still be formally adopted by the European Parliament and Council to become EU law, and Parliament’s Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting. Many of the proposed restrictions are not expected to take effect for at least another 12 to 24 months.

Although the final text has not yet been released, it has been announced that the EU Artificial Intelligence Act will prohibit the following applications of AI:

  • Using sensitive attributes (e.g., political, religious, or philosophical beliefs, sexual orientation, and race) in biometric categorization systems;
  • Indiscriminate scraping of facial images from the internet or CCTV footage to build facial recognition databases;
  • Using emotion recognition in educational institutions and employment settings;
  • Social scoring based on personal characteristics or social behavior;
  • AI systems that use dark patterns (i.e., that manipulate human behavior to circumvent people’s free will); and
  • AI that exploits people’s vulnerabilities (due to their age, disability, or social or economic situation).

Since the Act was first proposed, policymakers have sought to address continued advancements in AI technology such as generative AI, while also balancing the promotion of innovation with the protection of “fundamental rights, democracy, the rule of law and environmental sustainability.” They had to weigh the desire to control the perceived risks of AI against the fear that overregulation would make it even harder for European technology companies to catch up with their US counterparts. Leading up to the EU’s announcement, it had been reported that legislators were at a standstill over certain substantive issues concerning general-purpose AI and foundation models. As noted in an EU release from earlier this year, such models “are trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning.”

The EU announcement on the provisional agreement for the Artificial Intelligence Act addresses key topics including banned applications, law enforcement exemptions, obligations for high-risk systems, guardrails for general-purpose artificial intelligence systems, measures to support innovation and small and medium-sized enterprises (SMEs), and sanctions. The text of the law is expected to follow a risk-based approach to regulating AI, meaning that while certain practices are banned outright (see above), AI tools that pose higher risks of harm to society will face a higher level of scrutiny. Notably, the announcement states that “[f]or AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed.” The Act will likely require human oversight for the implementation of AI systems. All AI systems “used to influence the outcome of elections and voter behavior” are also classified as high-risk. Additionally, a mandatory fundamental rights impact assessment, among other requirements, will apply to the insurance and banking sectors.

The EU’s provisional agreement on the Artificial Intelligence Act and the recently issued US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mark significant developments in the codification of far-reaching AI regulation that is likely to affect myriad industry sectors as well as education, criminal justice, and public benefit administration.

Many expect the “Brussels Effect” to continue, as the Artificial Intelligence Act is likely to form the foundation for other global legislation, much as the EU’s General Data Protection Regulation (“GDPR”) has set the standard for other global privacy laws. As with the GDPR, violating the Artificial Intelligence Act could lead to significant penalties. Although the details of enforcement are not yet clear, violations of the Act could result in fines ranging from 35 million euros or 7% of global turnover down to 7.5 million euros or 1.5% of turnover, depending on the violation and the size of the company.
