Should we regulate artificial intelligence?

Hogan Lovells partner Winston Maxwell spoke on October 12, 2017 at a conference on artificial intelligence organized by the French think tank “Le Club des Juristes”. What follows is an English version of his prepared remarks.

Artificial intelligence (“AI”) permits valuable new applications for society. Autonomous vehicles will increase safety and reduce pollution. Voice recognition could make computer keyboards obsolete. AI will advance medical science and help manage the consequences of global warming. These applications generate huge value for society, but they can also create new risks, including disruptions in the workforce (a topic I won’t address here, although I recommend a study on the subject by the National Academies of Sciences).

Most of the risks and harms associated with AI are linked to how AI is used, not to AI itself. Regulatory responses should likewise focus on AI uses, not on AI as such. AI in autonomous vehicles should be regulated through car safety rules, AI in banks through banking regulations, and AI in home devices through product safety rules. These regulatory risks, and the appropriate regulatory responses, are easy to grasp.

More challenging are the questions linked to AI and fundamental rights. Does AI create a risk for fundamental rights and, if so, what is the appropriate regulatory response? Let me mention three fundamental rights that are often discussed in the context of AI:

  • the right not to be discriminated against;
  • the right to privacy;
  • freedom of expression.

AI and discrimination.

AI can help make decisions based on historical data. However, the outcome of the data analysis may yield results that are socially unacceptable. The algorithm may predict that I’m a bad credit risk because I grew up in a certain part of Oregon, or because my parents were born in another country. In most instances, the algorithm itself is not the origin of the bias; the problem lies in the data that are analyzed. Artificial intelligence analyzes historical data from real life, and data from real life are messy, reflecting the biases and bigotry of human society: “garbage in, garbage out.”

The designers of artificial intelligence systems are working on solutions to the problem. In an ideal world, we would analyze data from the world as we would like it to be, not the world as it actually exists. Since that is not possible, the answer may be to ensure that decisions resulting from AI are reviewed by humans before they produce effects for an individual. This kind of human review is precisely what Article 22 of the GDPR (EU Regulation 2016/679) attempts to guarantee. The GDPR gives individuals the right, subject to limited exceptions, not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects. In addition, existing laws prohibit all forms of discrimination based on sex, skin color or religion, whether in the workplace or elsewhere. The existing legal mechanisms are not perfect, but they do exist.

AI applications will merit testing and risk assessments before they are deployed, in order to anticipate potential problems such as illegal discrimination. Article 35 of the GDPR requires a data protection impact assessment (DPIA) for any processing likely to result in a high risk to individuals’ rights and freedoms. These impact assessments should be expanded to cover other risks associated with AI, such as the risk of discrimination.

AI and data protection.

The analysis of how AI might harm privacy and data protection rights is complicated by the broad range of harms that privacy and data protection law are meant to address. One harm is the violation of our private life: artificial intelligence might, for example, conclude that I suffer from a particular disease without ever having access to my medical records. The AI’s conclusion may in fact be correct, but it intrudes on a part of my private life that society agrees should be shielded from the outside world.

Another harm that data protection regulation is intended to address is the erosion of individual autonomy, leading to a breakdown of what Professor Karen Yeung of King’s College calls the “democratic commons.”[1] This concept is linked to informational self-determination, which posits that individuals should remain masters of their own data, because otherwise we would lose part of our freedom over our own lives. The GDPR attempts to deal with this concept by giving certain rights to individuals regardless of whether they suffer a harm. But this is a hard concept to apply in practice: individuals may object on principle to their data being used for machine learning, yet still expect a machine to recognize their voice and understand what they want!

The GDPR does not solve the problem of how individual rights to privacy should be balanced with the societal benefits that flow from AI innovation. Article 89 of the GDPR leaves it to member states to figure out how to strike the right balance between individual rights and the societal benefits of scientific research, including AI research. This approach poses an issue for France because major discoveries based on AI may require the use of personal data, and the social benefits that flow from those discoveries may be enormous, outweighing the individual rights to privacy that are interfered with.

Often policymakers avoid the question entirely by saying that big data innovation should rely solely on non-personal data. However, that is not a satisfactory answer, because almost any data involving human behavior can be considered personal data, and anonymization is becoming increasingly difficult. French researchers are using MRI images that are “de-identified” under US legal standards but would not be considered anonymized under European standards.

A better response to this challenge of balancing individual and societal benefits is to admit that, in some cases, personal data must be used for AI applications that generate a high public benefit, and then to develop a framework ensuring that sufficient safeguards are in place to mitigate the adverse consequences for data subjects.

AI and freedom of expression.

Freedom of expression is one of the pillars of democratic society because without it, no other right could exist. Artificial intelligence can have adverse effects on freedom of expression because AI can anticipate the kind of information that you like and simply feed you more of the same. This is called the “filter bubble” effect, which can lead to increasing polarization of society and the absence of democratic debate.

This “bubble effect” is a hard problem to solve, but I suggest that the problem is broader than a debate about AI. The issue really concerns how the state should intervene to help make sure that the marketplace of ideas functions properly. Generally, the state is the last actor most stakeholders would want intervening in a marketplace of ideas, because the state is, for many people, the most dangerous monopolist.[2] In the age of analog television and radio, media regulators helped ensure that citizens received a diverse set of viewpoints on topics of public interest. In the digital age, providing viewpoint diversity is much more difficult given the sheer volume of content available. How do you encourage citizens to explore all areas of a vast public library? Many countries are looking at how public service broadcasters can fulfill their public service role in an online environment. The regulatory debate should center on the future of media regulation, not on the regulation of AI.

Experimental regulation

Let me close by recommending an experimental approach to regulation that is being applied in other countries and that we should consider more in France. When dealing with fast-moving technology and regulatory risks, one technique is to allow innovative projects to go forward in a controlled environment. You then observe the results and determine whether the regulatory approach should be adjusted or generalized. In the US and the UK, they call this a “regulatory sandbox.”

In its 2016 annual study, the Conseil d’Etat recommended greater use of this “regulatory sandbox” approach in France. The study’s authors noted that the French Constitution was amended to permit regulatory experiments. This approach is in line with the better regulation theory developed by the OECD and the European Commission, which treats regulation as a form of social medicine that should be tested in clinical trials before being rolled out more broadly. Experimentation may be the best approach to challenging new subjects involving artificial intelligence, and it would also permit innovation to occur in controlled environments.

 


[1] K. Yeung, “Making sense of the European data protection law tradition,” in Algorithmic Regulation, Centre for Analysis of Risk and Regulation, Discussion Paper no. 85, September 2017.

[2] R. Posner, Economic Analysis of Law, 2011.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
