European AI Act Approved by the European Parliament

Barnea Jaffa Lande & Co.

On March 13, 2024, the European Parliament approved the AI Act regulating the use of artificial intelligence in the Union. This is groundbreaking legislation and the first of its kind in the world, aimed at creating effective protection for the rights of Union residents against the harmful use of advanced machine learning (ML) and artificial intelligence (AI) systems. The act emphasizes the inherent risks involved in using AI systems, prohibits the use of certain systems, and requires the implementation of stringent provisions for high-risk systems. Non-compliance with the act can lead to fines of up to 7% of a company’s annual global turnover.

AI Act Implementation Timeline

The provisions prohibiting the use of certain AI systems are expected to apply six months after the act enters into force.

After one year, the provisions relating to general-purpose AI models will come into force, member states will have to designate regulatory authorities, and the list of prohibited uses of AI will undergo its first review.

After 18 months, rules on post-market monitoring of high-risk AI systems will have to be published.

After two years, most of the provisions of the act will come into force, including the application of the act to most high-risk AI systems, publication of rules on penalties and fines in the member states, the establishment of regulatory “sandboxes” that will allow technological development under lighter restrictions, and an additional review of the list of high-risk AI systems.

After three years, the act will apply to high-risk AI systems that are safety components of products or are otherwise subject to additional EU regulation.

In 2030, the act will apply to large-scale IT systems established under European Union law in the area of freedom, security, and justice.

What should you do to comply with the AI Act?

To understand whether the AI Act applies to your business activities and to prepare for its enforcement, you should examine the following questions:

1. What is your role according to the AI Act?

The act imposes various obligations on different actors in relation to AI. Specifically, a distinction is made between deployers of AI systems and other entities involved in the development, production, and distribution of AI systems and products that incorporate them.

Deployers are defined as natural or legal persons and other entities who use AI systems for purposes other than personal, non-professional ones. That is, the act does not apply to individuals’ private use of these systems.

Providers are those who develop AI systems and place them on the market or put them into service, whether for payment or free of charge. The act therefore also applies to those who develop and make AI systems available on a non-profit basis or for the common good.

The act includes several additional definitions relating to the distribution chain and marketing of AI systems.

2. Will the act apply to your business?

Like other EU legislation, the AI Act applies not only to companies based in the European Union but also to those providing services within it. Specifically, the act applies to:

  • Providers placing AI systems on the European market or putting them into service in the EU.
  • Deployers of AI systems who are established or located in the EU.
  • Providers and deployers of AI systems located outside the EU, when the systems’ output is used within the EU, the systems are made available for use within the EU, or they are used independently or as part of a product within the Union.
  • Representatives within the EU of AI providers established outside it.

The scope of the act is broad, aiming to cover all systems that could affect EU residents. Therefore, companies that offer AI systems or use such systems in their regular business activities, and that have European customers, should examine the act’s potential applicability to their activity.

3. Which systems are prohibited for use according to the AI Act?

The act uniquely includes a list of AI systems prohibited for use in the European Union, which may be updated from time to time. As of today, the list includes the following:

  • Systems using subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person’s behavior by appreciably impairing the person’s ability to make an informed decision, in a manner that causes or is likely to cause significant harm.
  • Systems exploiting vulnerabilities of a person or group related to age, disability, or a specific social or economic situation, to influence their behavior in a way that causes or could cause significant harm.
  • Systems using biometric categorization of persons to deduce their race, political opinions, trade union membership, beliefs, sex life, or sexual orientation.
  • Systems evaluating people based on social behavior or personality characteristics and creating a social score that leads to discriminatory treatment of those people.
  • Real-time biometric identification systems used in public spaces by law enforcement authorities, unless certain exceptions apply.
  • Systems predicting criminal behavior based on profiling or personality traits, unless based on objective information indicating involvement in a crime.
  • Systems creating facial recognition databases by scraping images from the internet or security camera footage.
  • Systems inferring a person’s emotional state in the context of education or employment, unless used for safety or medical purposes.

It’s important for companies operating such systems to thoroughly examine their practices and the act’s applicability to their uses of AI. It should be noted that the prohibition on using these systems in the EU will come into effect six months after the act enters into force.

4. Is the system a high-risk system according to the AI Act?

In addition to prohibiting certain systems, the act imposes obligations on providers and deployers of high-risk AI systems.
High-risk systems are those that are:

  • Part of a product, or a safety component of a product, covered by EU harmonisation legislation and requiring prior approval under Union law (e.g., transportation vehicles, explosives, elevators, and medical products).
  • Systems in certain areas such as justice and law enforcement, biometric applications, critical infrastructure, education, employment and employee management, essential services and governmental benefits, migration, and border control.

5. What obligations apply to operators of high-risk systems?

According to the AI Act, high-risk systems are subject to special requirements, including:

    • Implementing a system for identifying and managing risks arising from the AI system.
    • Conducting a thorough review of the data on which the system is trained, in accordance with the criteria set out in the act.
    • Documenting the technical characteristics of the system before it is put into use.
    • Keeping logs of the system’s activity in a way that allows identification of cases where the system poses a risk and facilitates post-market monitoring (see the illustrative logging sketch after this list).
    • Maintaining transparency and disclosure obligations toward deployers of the system – including ensuring that deployers can understand the system’s output and providing operating instructions that cover the intended purposes of use, the level of accuracy, and possible errors.
    • Ensuring the possibility of effective human oversight.
    • Implementing data security measures and other tools that will ensure the reliability and resilience of the system.
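
By way of illustration only: the act requires activity logs that make risky cases traceable, but it does not prescribe any particular format. The minimal Python sketch below shows one possible shape for such logging; the log_inference() function, its record fields, and the review threshold are assumptions made for this example, not requirements taken from the act.

```python
# Minimal, illustrative logging sketch (assumed design, not an AI Act mandate).
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_inference(model_version: str, input_summary: str,
                  output_summary: str, confidence: float) -> None:
    """Record one model decision so risky cases can be identified later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,    # summarize; avoid logging raw personal data
        "output_summary": output_summary,
        "confidence": confidence,
        "needs_review": confidence < 0.5,  # example threshold for human follow-up
    }
    logger.info(json.dumps(record))

# Example call (hypothetical credit-scoring system):
log_inference("credit-scorer-1.2", "applicant-features-hash=ab12", "declined", 0.41)
```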

Providers, importers, and distributors of high-risk systems are also subject to additional obligations, such as quality assurance, registration, notification, and audit obligations.

Deployers of AI systems for purposes other than private, non-professional ones are required to carry out an assessment of the risks to fundamental rights before using the system. In addition, they must ensure adequate training for employees, comply with obligations regarding the information used as input to the AI system, operate in accordance with the instructions for use, monitor the systems’ activity, keep records, and more.

6. Does the AI system require the implementation of special transparency obligations?

Even if a system is not a high-risk system, it may be subject to special transparency obligations.

These systems include:

    • Systems directly interacting with people, such as chatbots (a minimal disclosure sketch follows this list).
    • Generative AI systems.
    • Emotion recognition systems.
    • Biometric classification systems.
    • Systems used for creating deepfakes.
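
For systems that interact directly with people, the central transparency duty is making clear that the user is dealing with an AI system. The minimal sketch below, offered only as an assumption of how this might look in practice, prepends such a notice to a chatbot’s first reply; the disclosure wording and the wrap_reply() helper are illustrative, not language taken from the act.

```python
# Minimal, illustrative disclosure sketch (assumed pattern, not an AI Act template).
DISCLOSURE = "Please note: you are chatting with an AI system, not a human."

def wrap_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure notice to the first reply of a conversation."""
    if first_turn:
        return f"{DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(wrap_reply("Hello! How can I help you today?", first_turn=True))
```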

The entry into force of the EU AI Act marks a significant milestone in the regulation of new technology worldwide. The act is expected to have substantial implications for how key technology companies develop, offer, and use AI tools and systems. Once the authorities overseeing the act’s implementation are established, the boundaries of enforcement, and the degree of freedom technology companies will enjoy in the European Union, will become clearer.

We recommend starting preparations for the implementation of the act now to complete the necessary processes before enforcement begins. Furthermore, the integration of artificial intelligence tools into a business may require adjustments and additional steps, which we have elaborated on in detail in our Legal Guide for Implementing AI Tools in Organizations.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Barnea Jaffa Lande & Co. | Attorney Advertising
