Fair and unfair algorithms: What to take into account when developing AI systems to fight COVID-19



The regulatory framework includes a number of sources from which to draw inspiration when developing AI technology. One of the most recent ones, the White Paper on Artificial Intelligence of the European Commission, is aimed at defining the risks associated with the implementation of AI systems, as well as determining the key features that should be implemented to ensure that data subjects’ rights are complied with (please see our articles The EU White Paper on Artificial Intelligence: the five requirements and Shaping EU regulations on Artificial Intelligence: the five improvements for a more detailed analysis).

It is worth noting that, particularly in relation to the development of AI technologies to fight the pandemic, the legislator is required to pay close attention both to these principles and to security. Risks associated with AI relate both to rights and to technical functionality. EU member states intending to use AI against COVID-19 will also need to ensure that any AI technology is ethical and is constructed and operated in a safe way.

With regards to ethics, it is worth noting that the European Commission issued Ethics Guidelines for Trustworthy AI in April 2019. Those guidelines stressed the need for AI systems to be lawful, ethical and robust (more particularly, AI should comply with all applicable laws and regulations, as well as ensure adherence to ethical principles / values and be designed in a way that does not cause unintentional harm).

With the aim of ensuring that fundamental rights are complied with, the legislator should consider whether an AI system will maintain respect for human dignity, equality, non-discrimination and solidarity. Some of these rights may be restricted for extraordinary and overriding reasons – such as fighting against a pandemic – but this should take place under specific legal provisions and only so far as is necessary to achieve the main purpose. Indeed, the use of tracking apps and systems that profile citizens in order to determine which ones may suffer from COVID-19 entails the risk that an individual’s freedom and democratic rights could be seriously restricted.

With regard to the development of AI technology, the legislator should ensure that biases are limited, as their effects are amplified when embedded in AI systems. In this respect, with the recent White Paper on Artificial Intelligence, the European Commission identified some key features that should be taken into account when designing high-risk AI applications, from broad training data and record keeping to information, human oversight and prior approvals and certifications (see also The EU White Paper on Artificial Intelligence: the five requirements for an overview).

How can unfair algorithms impact citizens’ rights: Is the GDPR able to protect citizens?

In light of the above examples, it is evident that any use of an AI system in the fight against the pandemic necessitates careful consideration of fundamental principles and requirements, well beyond the fundamental data protection principles (please see our article Using artificial intelligence against the spread of COVID-19, focused on how AI is supporting the fight against the virus).

In this regard, the Committee of Ministers of the Council of Europe issued, on April 8, 2020, its Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems. Its purpose is to advise states and public and private sector actors when designing, developing and deploying algorithmic systems, to ensure their compliance with human rights and fundamental freedoms. Among the recommendations, it is worth noting that the Council of Europe has encouraged member states, in relation to the drafting of policies and legislation, to consult with all relevant stakeholders and affected parties, as well as to foster general public awareness of the impact of algorithmic systems.

That said, the GDPR should not be disregarded. Useful inspiration can be drawn from the letter published in January 2020 by the European Data Protection Board (EDPB), addressing the “unfair algorithms” which, like many other AI systems, may lead to discrimination or other negative effects.

According to the EDPB, the GDPR constitutes a robust legal framework to protect citizens’ data protection rights. Indeed, the GDPR is technologically neutral by design and is intended to accommodate future technological developments. In particular, the EDPB highlights the importance and effectiveness of the following GDPR provisions in addressing potential risks and challenges associated with the processing of personal data through algorithms (and AI systems):

  • general principles of a risk-based approach, data protection by design and by default (Art. 25 GDPR);
  • general data protection principles, such as lawfulness, fairness and transparency, accuracy, data minimization and purpose limitation (Art. 5 GDPR);
  • need for a Data Protection Impact Assessment (i.e. DPIA, Art. 35 GDPR);
  • the right not to be subject to a decision based solely on automated processing (Art. 22 GDPR).

As very recently confirmed by the EDPB, algorithms used in contact tracing apps should work under the strict supervision of qualified personnel in order to limit the occurrence of any false positives and negatives, and the task “to provide advice on next steps” should not be automated. In this respect, the EDPB will issue specific guidelines on geolocation and tracing tools in the context of the COVID-19 pandemic.

That said, according to the EDPB the focus should be on the (hard and soft) enforcement of GDPR provisions, rather than the enactment of an ad hoc legal framework dealing with unfair algorithms (for further information on the privacy implications of AI systems, see also our previous articles: Artificial Intelligence vs Data Protection: which safeguards? and Artificial Intelligence vs Data Protection: the main concerns). Enforcement may be dealt with at different levels, from the issuance of sanctions pursuant to GDPR, to advocacy by EU and member states’ institutions, informing the general public on their data protection rights, engaging with stakeholders and launching public consultations.

Nevertheless, the EDPB is aware that the use of ‘unfair’ algorithms, like any other AI system, may give rise to legal issues other than those related to data protection. There is no one-size-fits-all solution: all envisaged anti-COVID-19 solutions will require an ad hoc review as well as an interdisciplinary approach (see also our article AI Data Lakes: top five issues to consider).


As the Council of Europe said: “In the delivery of public services and in other high-risk contexts in which states use such technologies, methods such as alternative and parallel modelling should be performed in order to evaluate an algorithmic system and to test its performance and output”. Regulating AI systems in Europe requires a broad approach, taking into account specific critical implications beyond data protection concerns. Member states should carefully assess whether alternative and parallel approaches can be used, always bearing in mind the protection of fundamental rights.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dentons | Attorney Advertising
