[co-author: Sedina Alicic]
In April 2018, as companies scrambled to come into compliance with the European Union’s General Data Protection Regulation, which was soon to become enforceable, the EU quietly announced its intention to craft another massive regulatory mandate—this time in the field of artificial intelligence. As with GDPR, the EU sees itself as having a duty to protect its citizens from what it perceives as “risky” technologies, whether deployed by governments or companies, that could infringe on their fundamental rights.
Though the EU’s efforts to develop rules around AI started with a whimper, the bang came in April 2021, when a proposed regulation was finally released. Tech insiders proclaimed the death of innovation, and industry groups and law firm blogs sounded the alarm that the proposed regulatory scheme would create a new, enormous and costly regulatory body, further stifling innovation. Regulators, on the other hand, lamented that the proposal did not go far enough in protecting citizens. Regardless of which perspective is correct, for the reasons detailed below, the proposed regulations should matter to every company that develops software, not just those that label their products AI.
AI, Broadly Defined
In an effort to be future-proof, the proposed regulation includes an extremely broad definition of AI: “[S]oftware that is developed with one or more of the techniques and approaches listed … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Though the listed techniques include things normally classified as AI, such as supervised, unsupervised and reinforcement learning, they also include more common methods such as Bayesian estimation and search and optimization methods that are not usually thought of as exclusive to the AI realm. It also does not matter if these techniques are simply used as a component of a much larger system—they still fall under the regulation.
Therefore, any company that develops software needs to look carefully at its codebase to see whether it contains any of these techniques and approaches, and at its products, to see whether regulators may suspect that it does based on the software’s behavior. In modern software packages, where recommendation engines, personalized experiences and automated processes are the norm, it may be impossible to find one that does not contain at least one “AI” algorithm. Furthermore, the proposed regulation also covers users of AI in the European Economic Area, so it would apply to non-tech companies, such as retailers who deploy chat bots to assist their customers and advertisers who use targeting software.
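To make the breadth of the definition concrete, consider a hypothetical and deliberately mundane co-purchase recommender written in plain Python (the product names and data below are illustrative assumptions, not drawn from the proposal). It relies on nothing more exotic than frequency counting, yet it "generate[s] outputs such as ... recommendations," and simple statistical and optimization methods of this kind appear among the proposal's listed techniques:

```python
from collections import Counter

# Hypothetical, deliberately mundane recommender: suggest the items most
# frequently co-purchased with whatever is already in a user's cart.
# No neural network in sight, yet it "generate[s] outputs such as ...
# recommendations" within the proposal's broad definition.
def recommend(cart, past_orders, top_n=2):
    co_counts = Counter()
    for order in past_orders:
        # Count items that appear alongside the cart's contents.
        if any(item in order for item in cart):
            co_counts.update(item for item in order if item not in cart)
    return [item for item, _ in co_counts.most_common(top_n)]

orders = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["bread", "milk"],
    ["tea", "biscuits"],
]
suggestions = recommend(["bread"], orders)  # "butter" ranks first
```

A regulator applying the definition literally could treat even this frequency-count snippet as an in-scope component; the point is not that it would be deemed high risk, but that the definitional net is cast very wide.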
Based on the GDPR process, we are some years away from final regulations being agreed upon and implemented. It is impossible to predict at this stage how much of the current regulatory formulation will survive the process that lies ahead. However, understanding the general conceptual approach the EU has taken gives us a reasonable idea of what regulators are concerned about and where potential landmines may lie.
The EU’s Risk-Based Approach to Regulating AI
The proposed regulation takes a risk-based approach to determine how to treat various AI applications. However, AI used only as a safety component of a larger product will still be governed under the existing product integrity regime.
- In the highest-risk category, and considered unacceptable risk, are two types of “harmful” AI practices the EU proposes to completely prohibit.
The first covers AI that deploys subliminal techniques to materially distort a person’s behavior in a manner that causes physical or psychological harm, that exploits the vulnerabilities of a specific group, or that is used for social scoring. This category seems far more inspired by movies and TV shows than by anything likely in real life; the only example provided is “toys using voice assistance encouraging dangerous behaviour of minors.” However, the vague definition of this category is cause for concern, as it could potentially be stretched to cover a broad range of activities, particularly in advertising.
Second, law enforcement’s use of real-time remote biometric identification systems, such as facial recognition technologies used for identification purposes in publicly accessible spaces, would be prohibited, with certain exceptions. This provision seems to be an effort to answer public concern regarding police surveillance, particularly in regard to the recent Black Lives Matter demonstrations. Due to the exceptions, both regulators and the public have called the current proposal’s measures on this front too lax, so it seems possible that the eventual regulation will only increase the level of scrutiny applied to these technologies. Companies working on such systems will want to follow the development of this regulatory regime closely or consider alternative uses of or customers for their technology.
- In the second “high-risk” category are AI uses that will be strictly regulated.
These cover a broad range of applications, including remote biometric identification, management and operation of critical infrastructure, determining access to or assessing students in education and vocational training, employee recruitment and monitoring, and access to essential private services and public services and benefits.
Here, the EU seems to be concerned about fairness, aiming to ensure that resources and opportunities are not allocated or denied arbitrarily or based on protected characteristics. In particular, given the widespread and ever-increasing use of algorithms in screening and assessing job candidates, companies that use or provide software for employment decisions or recommendations may want to take measures now to check for and reduce unintended biases. This category also covers the use or provision of credit scoring systems, another widespread application.
Action Items for Companies with Potentially “High-Risk” Software
What should a company do if it suspects its software might be interpreted as high risk? At the very least: be aware and pay attention. Depending on the potential impact on its business, getting involved in the process may be prudent. Judging by the timeline of GDPR, which took four years to move from a draft proposal to adoption, implementation is years away. Given the many stakeholders that will seek to influence the proposed regulation, the final regulation will likely look substantially different from what the EU has released thus far. Nevertheless, it is unlikely that the EU will abandon this regulatory initiative, and many technologies and products could be affected by whatever final rules it ultimately adopts.
It should be noted that some industry best practices have the potential both to improve a company’s products and to ready those products for regulation, particularly in the realm of data governance. For maximum impact, companies can put checks in place to ensure the quality of their data sets: that the variables they use are relevant, that their data sets are representative, and that their data is free of errors. In doing so, companies should consider the demographics of their target market and the intended use of their AI systems to ensure they have the right data.
By actively examining their data for possible biases and identifying gaps in their data sets, companies can be better prepared to meet the requirements of any regulation that does come out of the EU, and can better understand their internal processes to determine how best to comply when the time comes.
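The data-governance checks described above can be sketched in a few lines of code. The following is a minimal, hypothetical audit: the field names, the 5% minimum-share threshold and the dict-based data set are illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch of the data-governance checks described above. The field
# names, the 5% share threshold and the dict-based data set are
# illustrative assumptions, not regulatory requirements.
def audit_dataset(rows, group_key, required_fields, min_group_share=0.05):
    issues = []
    # Completeness: flag records with missing required fields.
    incomplete = [r for r in rows
                  if any(r.get(f) is None for f in required_fields)]
    if incomplete:
        issues.append(f"{len(incomplete)} record(s) missing required fields")
    # Representativeness: flag groups that fall below a minimum share.
    counts = {}
    for r in rows:
        group = r.get(group_key, "unknown")
        counts[group] = counts.get(group, 0) + 1
    for group, n in counts.items():
        if n / len(rows) < min_group_share:
            issues.append(f"group '{group}' underrepresented ({n}/{len(rows)})")
    return issues

applicants = [
    {"age": 34, "region": "north", "outcome": 1},
    {"age": None, "region": "north", "outcome": 0},
    {"age": 51, "region": "south", "outcome": 1},
]
problems = audit_dataset(applicants, "region", ["age", "outcome"])
```

A real audit would add checks tailored to the system's intended use, such as validating value ranges and comparing group shares against the demographics of the target market, but even a simple routine like this turns the abstract goals of relevance, representativeness and accuracy into something a development team can run before every model update.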