European Commission Proposes Reform on Liability Rules for Artificial Intelligence

Latham & Watkins LLP

The proposed directives aim to help claimants prove causation of damage and product defectiveness in claims involving complex AI systems, while creating legal certainty for providers.

On 28 September 2022, the European Commission issued two proposed directives to reform and clarify liability rules on artificial intelligence (AI):

  1. The Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive) introduces rules on evidence and causation to facilitate civil claims for damages in respect of harm to end users caused by AI systems.
  2. The Directive on Liability for Defective Products (Revised Product Liability Directive) seeks to repeal and replace the 1985 Product Liability Directive (Directive 85/374/EEC) with an updated framework to better reflect the digital economy. The Revised Product Liability Directive proposes to explicitly include AI products within the scope of its strict liability regime and to modify the burden of proof for establishing defectiveness of technically or scientifically complex products like AI systems.

Background

The European Commission considers that recent developments in AI have exposed shortcomings within the EU’s civil liability rules. The Commission outlined the following key reasons for proposing the new directives:

  • Complexity of AI systems: Existing liability rules often require claimants to prove that there was a wrongful action or omission and that such action or omission caused the claimant’s loss. However, the inherent complexity and opacity of some AI systems can make it difficult for claimants to understand or prove a causal link between the producer’s fault and their loss.
  • Harmonisation of national laws: National courts will increasingly have to hear disputes involving AI systems. Rather than leaving these courts to develop the law in this area in a piecemeal and fragmented fashion, the European Commission wishes to introduce a set of harmonised rules.
  • Legal certainty: Establishing clear rules on liability for AI systems will (i) create legal certainty for businesses, thereby driving further investment in AI, and (ii) increase societal trust in AI technologies among end users.

AI Liability Directive

The AI Liability Directive complements the European Commission’s proposed Regulation on Artificial Intelligence (AI Act) (for further information on the AI Act, see Latham’s briefing). Whereas the AI Act aims to classify AI systems by risk and regulate them accordingly, the AI Liability Directive seeks to impose liability if these risks have materialised into harm affecting end users.

Notably, the AI Liability Directive would allow national courts to compel providers of high-risk AI systems to disclose relevant evidence to claimants about a specific system that is alleged to have caused damage. This rule may apply if: (i) the claimant presents sufficient facts and evidence to support the claim for damages; and (ii) the claimant shows that they have exhausted all proportionate attempts to gather the relevant evidence from the defendant. Access to evidence would allow claimants to determine whether their claim is well-founded and, if so, how to substantiate their claim for damages. These disclosure powers under the AI Liability Directive dovetail with the transparency, audit, and recordkeeping obligations proposed under the AI Act.

Additionally, the AI Liability Directive introduces a presumption of causation between the defendant’s fault and the damage caused to a claimant by the AI system. This presumption would apply if the following three conditions are met:

  • The claimant has shown that the defendant failed to comply with a duty of care intended to protect against the damage that occurred, including a failure to comply with relevant obligations under the AI Act;
  • It can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system, or the AI system’s failure to produce an output; and
  • The claimant has shown that the output of the AI system, or the AI system’s failure to produce an output, gave rise to the damage.

However, the presumption would not apply in relation to AI systems categorised as high-risk under the AI Act[1] if the defendant shows that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link between fault and damage. For AI systems that are not categorised by the AI Act as high-risk, the presumption of causality would apply only if national courts consider it excessively difficult for the claimant to prove the causal link between fault and damage. The defendant has the right to rebut the presumption of causality by showing that its fault could not have caused the damage.

Revised Product Liability Directive

The Revised Product Liability Directive aims to modernise the Product Liability Directive by taking account of software and digital products. It retains the Product Liability Directive’s strict or no-fault liability regime, which means that manufacturers of defective products would be held liable without the need for claimants to establish any fault on the manufacturer’s part. In contrast, the AI Liability Directive, as discussed above, uses a “fault-based” regime.

Under the Revised Product Liability Directive, AI systems and AI-enabled goods fall explicitly within the scope of regulated products, thereby bringing them within the no-fault liability regime. The Revised Product Liability Directive also provides that hardware manufacturers, software producers, and providers of digital services that affect how a product works could all face liability for product defects.

The Revised Product Liability Directive further seeks to place ongoing liability on such manufacturers, software producers, and providers of digital services once the AI systems are on the market. These operators would remain liable if defectiveness results from a related service, software updates or upgrades, or a lack of software updates or upgrades required to maintain safety, to the extent the foregoing are within the relevant operator’s control.

The Revised Product Liability Directive also provides that if a national court determines that technical or scientific complexity makes it excessively difficult for claimants to prove a product’s defectiveness or the causal link between the defect and the damage, then such defectiveness or causation can be presumed. This presumption could apply provided the claimant can prove that: (i) the product contributed to the damage; and (ii) the product was likely defective or its defectiveness is a likely cause of the damage, or both.

Next Steps

The European Commission intends the AI Liability Directive and the Revised Product Liability Directive to complement each other in building a liability framework for AI systems and AI-enabled products, alongside the broader regulatory framework under the AI Act. This set of EU AI legislation currently sits at various stages of the EU legislative process and will be subject to further negotiation and potential amendment before coming into force. Questions remain as to how the strict liability and fault-based regimes will operate side by side in practice; how these liability regimes will relate in practice to the AI Act’s proposed conformity assessment framework for AI systems; and how market practice will develop for the allocation of liability along AI and data supply chains.

In the UK, the government has rejected comprehensive regulation of AI and AI liability in favour of an iterative, sector-based approach building on current regulation and guidance. For an overview of the UK approach, see Latham’s briefing on the UK’s AI Strategy.

This post was prepared with the assistance of Clarence Cheong in the London office of Latham & Watkins.

Endnote

[1] The AI Act categorises AI systems as follows, based on perceived risk to individuals: prohibited, high-risk, limited-risk, and minimal-risk AI systems. For further information on each of these risk categories and their associated obligations and requirements under the AI Act, see Latham’s briefing.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Latham & Watkins LLP | Attorney Advertising
