[co-authors: Cecilia Canova, Marco Propato]
Following on from discussions at our Digital Breakfasts and in previous articles (see here and here) on the long-awaited proposal for an EU regulation laying down harmonized rules on artificial intelligence ("AI Regulation"), we set out below its main features, which are currently still subject to consultation.
Generally speaking, the AI Regulation is the outcome of a long evaluation process that involved many EU institutions, with a view to providing harmonized rules for the development, deployment and use of artificial intelligence systems in the European Union.
What are the key aspects?
The definition of artificial intelligence. The AI Regulation provides a new definition of an artificial intelligence system as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Article 3, no. 1). Unlike the previous definitions set out in the well-known White Paper (see also here) and the Ethics guidelines for trustworthy AI, this definition does not explicitly mention data (notwithstanding the clear references to the General Data Protection Regulation – “GDPR” – contained in the AI Regulation).
Risk-based approach. Given the definition, and in line with the approach already proposed by the European Commission in the White Paper, the AI Regulation follows a proportionate risk-based approach.
It differentiates between uses of artificial intelligence that generate:
- Unacceptable risk: Systems posing such a risk are prohibited outright. These include, by way of example only, systems that exploit the vulnerabilities of a specific group of people due to their age or physical or mental disability, in order to materially distort their behavior in a manner likely to cause physical or psychological harm.
- High risk: These include systems used for biometric identification and categorization of people, systems intended for use in recruitment or selection, and systems used by law enforcement and judicial authorities to assess the risk of a person offending or reoffending or the risk for potential victims of criminal offences. The classification of high-risk systems depends not only on the function of the system itself, but also on the intended purpose and the way in which the system is used, in line with existing product safety legislation. The list of systems that fall under the definition of high-risk systems is contained in Annex II and Annex III, which may be updated by the European Commission.
- Limited risk: These include, for example, chatbots, the use of which must comply with transparency obligations.
- Minimal risk: These include, for example, video games that implement artificial intelligence techniques. The use of such systems does not require compliance with the obligations provided for by the AI Regulation.
The main obligations. Given the impact that high-risk systems can have on the rights and freedoms of individuals, the AI Regulation provides for strict obligations, both before and after the artificial intelligence system is placed on the market. These apply not only to providers and manufacturers, but also to importers, users and other third parties. Such obligations include, for instance: the adoption of adequate risk assessment and mitigation systems; the use of high-quality datasets to feed the system, in order to minimize risks and discriminatory outcomes; the provision of clear, transparent and adequate information to users; the implementation of a conformity assessment procedure; and the retention of the logs automatically generated by high-risk AI systems.
AI governance and GDPR. It is not surprising, given that artificial intelligence needs both personal and non-personal data, that the AI Regulation makes several references to the GDPR. Such references include: the impact assessment, which becomes the basis of a broader conformity assessment; the transparency obligations; the administrative fines; the creation of the European Artificial Intelligence Board, which is similar to the European Data Protection Board; and the territorial scope.
The European Commission has launched a public consultation, ending on August 6, 2021, to gather input and comments. The outcome will then be presented to the European Parliament and the Council.
This is obviously not the end of the process. Many other initiatives are expected, including the draft EU Commission rules addressing liability issues related to new technologies, including artificial intelligence systems (last quarter 2021 – first quarter 2022), together with the revision of sectoral safety legislation (second quarter 2021).
As stated by the EU Commission, AI will remain fundamental for any development strategy for the next “Digital Decade”. The current consultation may also have a significant impact beyond the EU borders. Are you interested in contributing to the public debate? Hurry up: there are only a few weeks remaining!