Changing the Safety and Liability Rules on AI – What Is the European Commission Planning?

Morrison & Foerster LLP

A White Paper on Artificial Intelligence from the European Commission provides insight into how governments might change product safety and liability rules to address the issues arising from AI systems.

Five years ago, the European Commission (EC) established a key policy agenda designed to deliver significant legal changes to the “Digital Single Market” by 2019. That agenda led to a series of material regulatory changes across the EU.

Now, the EC has made the creation of a “Europe fit for the digital age” a key political goal and has published a series of documents intended to shape Europe’s digital future. Two of these documents relate to AI systems: a white paper titled “On Artificial Intelligence – A European approach to excellence and trust” and a report on the safety and liability implications of AI, the Internet of Things and robotics (together, the “Reports”).

Among other issues, the Reports discuss new proposals for changing the regulatory framework on product safety and liability in the EU to address the changes brought by AI systems.

What are the current EU rules on product safety and liability?

As identified in the Reports, and in an EC expert report on AI published last year, the EU has regulated product safety and liability in three ways:

  • the EU Product Liability Directive;
  • the EU General Product Safety Directive (GPSD); and
  • sector-specific regulation (e.g., for motor vehicles).

The EU Product Liability Directive imposes liability for damage caused by a defective product. Injured individuals must show a causal link between the damage and the defect, but they do not have to prove negligence or fault on the part of the producer or importer. There are certain exemptions to this regime, including a defense for producers where the defect appeared after the product entered circulation.

The GPSD applies only when a product is not subject to a sector-specific safety regime, such as the regime for medical devices. It requires producers and distributors not to place any product on the market unless it is safe. For both producers and distributors, placing an unsafe product on the market is a criminal offense; a distributor, however, breaches this requirement only if it knew, or should have known, that the product was unsafe.

Additional rules on product safety currently vary by EU Member State.

What do the Reports recommend changing?

We have summarized seven of the Reports’ key recommendations in relation to AI product safety and liability below:

1. Including software within the scope of product regulation

The Reports note that software is a key part of any AI system, but that the existing EU product safety regime only takes into account the risks stemming from software integrated into a product at the time it is placed on the market. The Reports therefore consider whether requirements should be introduced to ensure the safety of stand-alone software applications.

2. Identifying certain AI systems as “high-risk” and making such systems subject to a more stringent regulatory framework

The Reports consider that certain AI systems pose heightened risks for EU citizens and recommend that producers of such high-risk AI systems be required to put those systems through a “conformity assessment” before placing them on the market. The Reports suggest that the assessment should provide for:

  • repeat assessments, taking into account how AI systems evolve over time;
  • the verification of training data; and
  • ways to remedy identified shortcomings.

High-risk AI systems may include systems that:

  • impact employment equality; or
  • use remote biometric identification and other “intrusive surveillance technologies”.

3. Apportioning responsibility for risks in AI systems to the actors best placed to address them

The Reports propose that obligations to ensure the safety of an AI system should be distributed across the different economic actors involved in its supply chain, including developers, distributors, service providers and even users. The EC believes that each obligation should fall on the actor best placed to address the relevant risk. This reflects a shift from the existing regime, which targets producers and importers.

4. Requiring products to undergo a risk assessment at different points in the product’s lifecycle

As AI systems commonly undergo continuous development after they have entered the market, the Reports consider whether the concept of “putting into circulation” in the EU Product Liability Directive should be revisited to take into account how AI systems may change over time.

5. Introducing specific requirements to address the risks associated with faulty training data

The EC notes that EU product safety legislation does not address the risks derived from the use of faulty training data (the Reports give the example of a computer vision system that has not been trained to detect objects in poorly lit environments). The Reports therefore consider whether specific provisions are required to address the risks of faulty training data during the design phase of an AI system, and whether additional provisions are needed to maintain the quality of training data while the product is in use.
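By way of illustration only, the short Python sketch below shows one way a developer might audit an image training set for the kind of low-light gap the Reports describe. The function name, thresholds and data layout are our own assumptions for this example; nothing here is prescribed by the EC.

    import numpy as np

    def has_low_light_coverage(images: np.ndarray,
                               dark_threshold: float = 0.2,
                               min_fraction: float = 0.1) -> bool:
        """Check that a training set contains enough low-light examples.

        `images` is assumed to have shape (n, height, width) with pixel
        values scaled to [0, 1]. The threshold and minimum fraction are
        illustrative values chosen for this sketch, not regulatory figures.
        """
        per_image_brightness = images.mean(axis=(1, 2))
        dark_fraction = (per_image_brightness < dark_threshold).mean()
        return dark_fraction >= min_fraction

    # Synthetic dataset of 1,000 uniformly bright 64x64 "images": the audit
    # flags the absence of poorly lit scenes before the system is trained.
    rng = np.random.default_rng(0)
    dataset = rng.uniform(0.3, 1.0, size=(1000, 64, 64))
    print(has_low_light_coverage(dataset))  # False - low-light scenes missing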

6. Requiring developers to disclose the design parameters of algorithms and metadata of datasets in the event of accidents

To address the “black-box effect” of AI systems, which makes it difficult for individuals to trace the decisions made about them, the EC states that it is necessary to consider implementing requirements to improve the transparency of algorithms. The Reports state that one way to tackle this issue would be to require developers to disclose the design parameters of algorithms and the metadata of datasets in the event of accidents caused by AI systems.
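Again purely as an illustration, a developer might keep a machine-readable record of those design parameters and dataset descriptors alongside each released model, ready to be produced if an accident occurs. The field names below are our own example; the EC has not specified any format.

    import json
    from datetime import datetime, timezone

    # Hypothetical disclosure record: the algorithm's design parameters and
    # the metadata (not the contents) of its training datasets.
    model_record = {
        "model_name": "pedestrian-detector",          # hypothetical system
        "design_parameters": {
            "architecture": "convolutional neural network",
            "learning_rate": 1e-4,
            "training_epochs": 50,
        },
        "training_data_metadata": {
            "source": "internal road-scene captures, 2019",
            "num_images": 120000,
            "known_gaps": ["few low-light scenes"],
        },
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

    with open("model_record.json", "w") as f:
        json.dump(model_record, f, indent=2)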

7. Reversing the burden of proof, strict liability and insurance

The EC notes that it is seeking views on whether, through an EU initiative, the burden of proof under national liability rules should be reversed for damage caused by AI systems.

For AI systems with a specific risk profile, the EC considers whether strict liability may be appropriate, coupled with a requirement to obtain appropriate insurance. This would mirror the Motor Insurance Directive, under which drivers must insure their cars so that individuals receive compensation in the event of an accident.

What action should my organization take based on the Reports?

Organizations that develop, supply or use AI systems should continue to monitor the EC’s progress on introducing regulation on these issues. Once the EC publishes more concrete proposals, it may also be useful for organizations to:

  • consider their processes for developing and licensing AI systems in order to meet any new product safety requirements; and
  • review how they apportion liability and risk with their suppliers and customers in agreements.

London-based Trainee Solicitor Danial Alam contributed to the writing of this Alert.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP | Attorney Advertising
