On April 21, 2021, the European Commission released its highly anticipated proposal for a regulation governing artificial intelligence (AI). The proposal, drafted by the Commission and its advisers, plays a central role in the Commission’s ambitious European Strategy for Data.
While the regulation has a long road ahead before it is finalized, businesses should prepare for significant regulation in this space. Through this Alert, Foley Hoag intends to provide you with the basics of the proposal, including steps that your business can take now to prepare for the types of changes that this regulation will require.
As written, the proposed rules will cover the topics addressed in the questions below.
How does the Commission define AI?
The term “AI system” is defined broadly: “software that…can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing environments they interact with.”
Are there any types of actions prohibited by the proposal?
Yes. The proposal prohibits certain AI practices outright, including systems that deploy subliminal or manipulative techniques causing harm, systems that exploit the vulnerabilities of specific groups, and social scoring by public authorities.
Is facial recognition covered by the proposal?
Yes, certain law enforcement uses of facial recognition systems intended for public spaces are covered under prohibited AI practices.
What areas are the main focus of the proposal?
The proposal focuses on “high-risk” AI systems. These include AI used in areas such as biometric identification, the management of critical infrastructure, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and the administration of justice.
How should businesses approach self-regulation of AI?
The draft regulation adopts a “development through deployment” approach. This means that high-risk AI systems are subject to scrutiny before they are placed on the market or put into service as well as throughout their life cycle.
What protocols or systems should businesses consider for their AI governance to meet regulatory benchmarks?
Businesses can start implementing internal governance processes now. According to the draft regulation, businesses should consider establishing a mandatory risk management system; strict data use and data governance requirements; technical documentation and record-keeping requirements; and post-market monitoring and incident-reporting requirements.
How can businesses meet these requirements?
Businesses should consider external assessments of AI compliance. The draft regulation contemplates a conformity assessment performed either by a third party or by the provider itself. Compliance obligations under the proposal may fall on all parties involved: the provider, importer, distributor, and user of the AI system.
What about transparency?
AI systems must be designed and developed in such a way that human oversight is guaranteed.
Additionally, the regulation contains special transparency provisions to ensure that people know when they are dealing with an AI system rather than a human decision-making process.
Who is responsible for enforcement?
Preexisting regulation touching on AI, like France’s facial recognition rules, will need to be harmonized with the EU regulation once it is adopted.
If adopted, the regulation will be enforced by the EU member states. But in the future, the proposal foresees the establishment of a European AI Board that will be responsible for: assisting the national supervisory authorities and Commission to ensure the consistent application of the regulation; issuing opinions and recommendations; and collecting and sharing best practices among member states.
The regulation, once adopted, will enter into force 20 days after its publication in the Official Journal. Its provisions will become enforceable 24 months after that date, creating a long grace period to account for innovation that may occur between drafting and adoption.
What happens next?
The draft regulation will likely not be adopted for some time, given the complexity of AI and the number of nations and stakeholders involved. The European Parliament and the Council will engage in further consideration and debate, and may propose amendments.
However, this draft regulation is likely to set the benchmark for ethical AI. In the absence of other guidance, companies should consider it carefully as an indication of where national legislation and international consensus are moving on issues surrounding AI.