Outlook on a common liability framework for high-risk AI systems in the EU

Hogan Lovells

In May 2020, the European Parliament’s Committee on Legal Affairs took the initiative and published a draft report with recommendations to the Commission on a civil liability regime for artificial intelligence (AI) (the Draft Report). The Draft Report provides an outlook on some of the legal concepts that will be subject to discussion in a future legislative process.

The Committee considers Directive 85/374/EEC (the PL Directive) and the Draft Report to be the two pillars of a common liability framework for AI systems and acknowledges that such a project requires close coordination among all political actors. The Draft Report recommends drawing up an EU regulation on liability for the operation of AI systems and presents a proposal for such a regulation. The proposal suggests strict liability on the part of the "deployer" of certain "high-risk" AI systems and an intensification of the deployer's liability for other AI systems.

The debate on the appropriate legal concept for AI liability is ongoing. We therefore expect the Draft Report to undergo further amendments. The proposal to implement the concept by way of a regulation, i.e. an act directly applicable in the Member States, could also prompt further discussion, as product liability was harmonized by way of a directive.

The amendments to the Draft Report are expected to be debated in the Legal Affairs Committee in late June or early July 2020. A vote on the report is scheduled for 28 September 2020, followed by a plenary vote in October 2020.

Proposed liability framework for AI systems

The Draft Report proposes a new liability regime for so-called "deployers" of AI systems. A "deployer" would be the person who controls the use, risks, and benefits of the AI system's operation. Where there are several deployers, they would be jointly and severally liable.

The concept is based on the distinction between AI systems considered "high-risk" and "other AI systems."

High-risk AI systems

With regard to high-risk AI systems, the proposal recommends strict liability of the "deployer" for any harm caused by the system.

An AI system would be "high-risk" if it is sufficiently likely to cause personal injury in a random and unpredictable way. The evaluation considers three aspects: the probability of occurrence, the severity of the expected harm, and the manner of use. For the sake of clarity, the proposal aims to exhaustively enumerate all high-risk AI systems in its Annex.

The concept envisages a cap on the amount of compensation. Additionally, strict liability claims would be subject to special limitation periods. Furthermore, the proposal prescribes mandatory insurance for the "deployer."

Other AI systems

With regard to other AI systems not classified as high-risk, the proposal suggests maintaining fault-based liability for the deployer. However, the deployer's fault would be presumed unless the deployer provides proof to the contrary. To that end, producers of AI systems would be obliged to collaborate with the deployer.

Producer’s liability

The proposal is not supposed to amend or supersede the PL Directive. Rather, it is intended to stand independently alongside it for the purpose of the deployer's liability. According to the Draft Report, its purpose is to close a perceived gap in the liability for AI systems. The producer thus remains liable under the rules of the PL Directive. However, if the producer and the deployer of an AI system are the same person, the proposed deployer's liability would prevail, according to the authors of the Draft Report.

The concept differs in some aspects from the legal framework of the PL Directive. For example, it suggests introducing compulsory insurance for the deployer of high-risk AI systems.

What is the impact for digital health and the life sciences industry?

We recommend closely monitoring the ongoing discussion. New liability concepts and potential compulsory insurance coverage must be taken into account early on in projects. The Draft Report itself considers liability risk to be "one of the key factors that defines the success of new technologies, products and services."

If the concept goes forward, it will be crucial which AI systems are classified as "high-risk."

The current proposal would require providers of AI systems in the life sciences industry to check (i) whether they fulfill the definition of "deployer" and (ii) how their technology is classified. So far, only the transportation and assistance sectors have been included in the proposal. That could change in the future, given the impact of some medical AI applications on patients' lives and well-being.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
