Artificial intelligence (AI) remains one of the main features of most European countries’ strategies, even during the COVID-19 emergency. AI can not only improve health care systems but also serve as a fundamental tool for analyzing data to fight and prevent pandemics.
While there is little doubt about the benefits that can be drawn, there are also increasing concerns about how to effectively address the risks associated with the use of AI systems. Such concerns include, among others, data privacy risks – AI may easily be used to de-anonymize individuals’ data (see this previous bite on this point) – as well as potential breaches of other fundamental rights, including freedom of expression, non-discrimination and human dignity.
There has been demand for a common approach to address such concerns, in order to give citizens and corporations enough trust to use (and invest in) AI systems, while also avoiding the market fragmentation that would limit the scale of development throughout Europe.
With this in mind, the European Commission recently published its White Paper on Artificial Intelligence, which is aligned with the key principles set out in the Guidelines on Trustworthy AI published by the EU High-Level Expert Group, namely: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
In addition to some improvements to the liability regime (addressed separately in our TMT Bites), the EU Commission proposes a risk-based approach, with regulatory intervention proportionate to the risk and aimed mainly at “high-risk” AI applications. High risk is identified where both the relevant sector (e.g. health care) and the intended use involve significant risks.
According to the EU Commission, AI regulations should be based on the following main requirements:
Further requirements may be set for specific systems, such as remote biometric identification, which allows individuals to be identified at a distance in a public space through a set of biometric identifiers (e.g. fingerprints, facial images) compared against data stored in one or more databases. Such requirements may apply regardless of the sector involved, in order to ensure that any such processing is justified, proportionate and subject to adequate safeguards.
The Commission further highlighted that, in order to make future regulations effective, there should be a level playing field: any such requirements should therefore apply to all those providing AI products or services in the EU, including non-EU companies.
The detailed implementation of the above requirements is yet to be determined, including the frameworks for testing and certification.
Do you agree with the above requirements? We would be interested in hearing your views.