[co-author: David O’Donovan]
Assessment List for Trustworthy Artificial Intelligence
On July 17, 2020, the European High-Level Expert Group on Artificial Intelligence (“AI HLEG”) presented its final Assessment List for Trustworthy Artificial Intelligence (“ALTAI”), a self-evaluation tool designed to help companies identify AI-related risks, minimize them, and determine what active measures to take.
In December 2018, the European Commission (“EC”) announced in a communication its vision for artificial intelligence (“AI”), supporting “ethical, secure, and cutting-edge AI made in Europe”. To implement this vision, the EC created the AI HLEG, a group of 52 experts in the field of AI, to draft guidelines on AI ethics as well as policy and investment recommendations.
On April 8, 2019, the AI HLEG published its Ethics Guidelines for Trustworthy AI (“Guidelines”), which incorporated more than 500 comments from stakeholders received through an open consultation procedure. While the Guidelines were not meant as a legally binding document, they aim to establish a framework of guiding principles to assist developers and deployers in achieving “Trustworthy AI”, i.e., AI that is lawful, ethical and robust. In particular, the Guidelines identify four “Ethical Imperatives”, derived from EU fundamental rights (including privacy considerations), which are crucial to ensuring that AI systems are developed, deployed and used in a trustworthy manner: respect for human autonomy, prevention of harm, fairness and explicability. Fortunately, these principles do not remain at an abstract level but are accompanied by specific, practical questions to consider when building AI technology, for instance: does the AI system interact with decisions by human end-users, e.g., does it recommend actions or decisions or present options to the user? If the AI system supplements part of human work, have the task allocations and interactions between the AI system and humans been considered and evaluated to allow for appropriate human oversight and control?
Seven Requirements for Trustworthy AI
The concept of Trustworthy AI, as set out in the Guidelines, is premised on seven key requirements, which are intended to apply continuously throughout an AI system’s life cycle:
(1) Human Agency and Oversight. AI systems should enable humans to make their own informed decisions and foster fundamental rights, and should not decrease, limit or misguide human autonomy by concealing the AI origin of certain information or decisions. This requirement is mainly aimed at AI systems that guide, influence or support humans in decision-making processes, for example, algorithmic decision support systems or risk analysis/prediction systems. To achieve this goal, AI systems will require human oversight mechanisms (human-in-the-loop, human-on-the-loop and human-in-command) to decide when and how to use, or cease to use, the AI system in any particular situation.
(2) Technical Robustness and Safety. Trustworthy AI requires algorithms to be secure and sufficiently robust to deal with errors or inconsistencies during all phases of an AI system’s life cycle. This includes ensuring there is a fail-safe fallback plan to address AI system errors, as well as ensuring systems are accurate, reliable and reproducible.
(3) Privacy and Data Governance. Individuals should have full control over their own data. AI systems should incorporate protections regarding privacy, as well as ensure the quality and integrity of the data used.
(4) Transparency. The processes of AI development should be documented to allow AI systems’ outcomes to be traced. Companies should be able to explain the AI system’s technical processes and the reasoning behind the decisions or predictions that the AI system makes. Consumers need to be aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations.
(5) Diversity, Non-discrimination and Fairness. AI systems should be inclusive and accessible to all users, regardless of age, gender, abilities or other characteristics. Unfair bias should be avoided, as it could have multiple negative implications, including the marginalization of vulnerable groups.
(6) Societal and Environmental Well-being. AI systems should benefit all human beings and must be sustainable and environmentally friendly. The AI system’s impact on parts of the economy as well as the society at large should also be considered.
(7) Accountability. Mechanisms should be put in place to ensure responsibility and accountability for the development, deployment and use of AI systems, especially in the occurrence of negative impact on consumers. AI systems should be available for evaluation to auditors and provide adequate and accessible redress procedures to users.
Trustworthy AI Assessment List
The Guidelines set out an Assessment List, intended to operationalize the key requirements of Trustworthy AI. Following a pilot process in 2019, the final version of the Assessment List was published on July 17, 2020. The ALTAI helps companies identify the risks of their AI systems and implement appropriate measures to mitigate those risks through the implementation of the seven key requirements. While the ALTAI is voluntary, it is an important step on the path to formal regulation of AI, as it enables companies to signal compliance with it and thus foster consumer trust. The AI HLEG noted that the Assessment List should be used in a flexible manner, and companies may choose to focus on some elements more than others, depending on the particular industry or sector in which they operate.
The AI HLEG recommends that organizations perform a fundamental rights impact assessment (“FRIA”) to determine whether their AI systems respect the EU Charter of Fundamental Rights and the European Convention on Human Rights. The FRIA should include questions such as:
After performing the FRIA, organizations can then proceed to carry out their self-assessment for Trustworthy AI. The assessment consists of a set of questions for each of the seven requirements for Trustworthy AI. A non-exhaustive list of key questions is set out in the ALTAI. Such questions include:
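For readers who want to track their answers systematically, the self-assessment described above can be sketched as a simple checklist keyed to the seven requirements. This is a minimal illustrative sketch only: the ALTAI itself is a questionnaire (and an online tool), not code, and the class, method names and sample questions below are hypothetical placeholders, not the official ALTAI question list.

```python
# Hypothetical sketch of an ALTAI-style self-assessment tracker.
# The requirement names come from the Guidelines; everything else
# (class, methods, sample questions) is illustrative, not official.
from dataclasses import dataclass, field

REQUIREMENTS = (
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
)

@dataclass
class Assessment:
    """Records yes/no answers per requirement and flags open risks."""
    # question -> (requirement, satisfied?)
    answers: dict = field(default_factory=dict)

    def record(self, requirement: str, question: str, satisfied: bool) -> None:
        if requirement not in REQUIREMENTS:
            raise ValueError(f"unknown requirement: {requirement}")
        self.answers[question] = (requirement, satisfied)

    def open_risks(self) -> list:
        """Questions answered 'no' still need mitigation measures."""
        return [q for q, (_, ok) in self.answers.items() if not ok]

# Example usage with placeholder questions:
a = Assessment()
a.record("Transparency",
         "Are users informed that they interact with an AI system?", True)
a.record("Accountability",
         "Is the system available for third-party audit?", False)
print(a.open_risks())
```

Answers flagged by `open_risks()` are the points where, in the ALTAI's logic, an organization would document mitigation measures before deployment.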
European Commission Unveils Plans for AI Regulation
Building upon all of the above-mentioned guidance, as well as the recent White Paper on AI, the European Commission finally unveiled its inception impact assessment for AI legislation on July 23, 2020. While the completed impact assessment is not expected until December 2020, this initial roadmap defines the scope and goals of the ongoing impact assessment study. The European Commission currently welcomes feedback on this roadmap to AI legislation through September 10, 2020.
The study, covering the EU-wide digital market, would examine different legislative options for AI regulation, ranging from no action or only soft-law guidelines, to the implementation of voluntary industry-led compliance schemes and codes of conduct, all the way to full-scale AI regulation. Many factors are considered as part of the study, including how different types of AI regulation would impact small- and medium-sized enterprises (“SMEs”) as compared to well-established large companies; the potential competitive advantages AI regulation may bring to the EU digital market by fostering consumer trust; and the legal fragmentation and uncertainty that would result if, in the absence of EU-wide AI regulation, EU Member States were left to regulate AI individually.
The goal of this impact assessment is to determine the best legislative path to implement the EU’s approach to AI: fostering consumer trust in AI technologies based on an appropriate legal and ethical framework with a particular focus on the EU’s respect for fundamental rights at its core. Several key concerns about AI will be addressed:
Such concerns are likely to be the main focus of AI regulation, expected in 2021, following the much-anticipated findings of the impact assessment study.