Opinion of the European Data Protection Supervisor on the AILD and PLD Proposals

Hogan Lovells

The European Data Protection Supervisor and the European Economic and Social Committee welcome the PLD and AILD Proposals, which aim to facilitate liability claims against AI providers and/or users. The EDPS, however, notes that the measures currently proposed may not be sufficient to achieve the objectives of the directives and recommends certain amendments to extend the protections afforded to injured parties. The EESC, on the other hand, considers the measures provided for in the AILD Proposal to be largely sufficient and emphasises the need for clear legal definitions and for monitoring after the directive enters into force. In this publication, we take a closer look at the main recommendations outlined in the EDPS’ opinion in the context of the EESC’s opinion and the current legislative process of both proposals.


The European Data Protection Supervisor (EDPS) published its opinion on the European Commission’s proposals for a revised Product Liability Directive (PLD Proposal) and an AI Liability Directive (AILD Proposal), which aim to bring product liability rules into the digital age. The EDPS opinion, published on 13 October 2023, complements the opinion published by the European Economic and Social Committee (EESC) on 24 January 2023, which considers the AILD Proposal and its aim of adapting non-contractual civil liability rules to artificial intelligence.

In his opinion, the EDPS, Wojciech Rafał Wiewiórowski, notes that, given the rapid increase in the complexity of AI systems and the challenges this brings, existing liability rules may not be equipped to deal adequately with claims for damage caused by AI systems. An adapted liability regime is therefore needed to ensure that victims harmed by AI products or services benefit from the same level of protection as they would if the harm had been caused under any other circumstances.

Whilst the EDPS welcomes and supports the AILD and PLD Proposals, it recommends certain amendments in order to achieve the Commission’s objectives, in particular: (i) extending the rights and protections of victims of AI-related damage in liability claims; and (ii) explicitly regulating the relationship between the AILD Proposal and EU data protection law.


Consumers’ rights and AI liability claims

In order to afford equivalent protection for all injured parties in the event of damage caused by AI systems, the EDPS recommends that the AILD Proposal be amended as follows:

  • Procedural safeguards for all AI systems, irrespective of their classification: Most of the procedural safeguards in the AILD Proposal, including the right of access to evidence and the rebuttable presumption of causality in Articles 3 and 4, apply only to high-risk AI systems, i.e. AI systems that are likely to have a negative impact on safety or fundamental rights, as defined in the proposed Artificial Intelligence Act (AI Act Proposal). The EDPS criticises this limitation and calls on the co-legislators to decouple the procedural safeguards from the risk-based classification of AI systems. The EDPS notes that non-high-risk AI systems also have the potential to cause significant harm to individuals, and that victims of such harm would face the same evidentiary problems as those affected by high-risk AI systems. The EDPS, in agreement with the EESC, considers that the distinction between high-risk and non-high-risk AI systems as regards the applicability of procedural safeguards is an obstacle to achieving equal protection for all injured parties.
  • Evidential value of technical information: Although the AILD Proposal provides for a right of access to evidence, the EDPS notes the difficulty of using the technically complex and non-transparent internal processes of AI systems for evidentiary purposes. To address this issue, the EDPS suggests adopting an approach similar to that of the AI Act Proposal (Article 13(2)), i.e. an obligation for providers to give users “concise, complete, correct and clear information that is relevant, accessible and comprehensible”. The EDPS notes that, with a disclosure requirement not limited to mere technical documentation, the information disclosed under the AILD Proposal would have greater evidential value.
  • Further easing the burden of proof for claimants: The EDPS criticises the choice of a fault-based liability system. Under the current AILD Proposal, injured parties would still have to prove fault or negligence on the part of the defendant, as well as a causal link between that fault and the damage suffered. According to the EDPS and representatives of consumer interest groups, proving fault would remain difficult for injured parties because of the "prevailing information asymmetry between consumers and professionals". Whilst consumer organisations go as far as to recommend a reversal of the burden of proof, such that the injured party would only have to prove the existence of damage and the defendant would have to prove that the damage was caused through no fault of its own (or face liability by default), the EDPS simply calls on the co-legislators to consider restructuring the current rules towards a “fairer and balanced” system of liability. The EESC, on the other hand, welcomes the fault-based liability system and appreciates "the balance the directive strikes between victims' rights and the interests of AI developers". The EESC values the development of AI technology and believes that adopting a moderate presumption rather than strict liability will open up more opportunities for developers.
  • AI produced or used by EU institutions, bodies and agencies: The EDPS notes that both the AILD and PLD Proposals appear not to apply to damage caused by AI systems produced or used by EU institutions, bodies and agencies, even though the AI Act Proposal does apply to those institutions. As a result, any damage caused by AI produced or used by EU institutions could only be compensated in accordance with Article 340(2) of the Treaty on the Functioning of the European Union (TFEU), which does not provide for procedural safeguards comparable to those in the AILD Proposal. In order to remedy this potentially unequal treatment, the EDPS proposes introducing an equivalent level of protection for damage caused by AI systems produced or used by EU institutions, bodies and agencies within the PLD and AILD Proposals.
  • Risk of circumvention and automation bias: In order to maintain legal certainty, the EDPS recommends deleting the last two sentences of Recital 15 of the AILD Proposal[1], which address AI liability claims where the damage is caused by a human assessment followed by a human act or omission. The EDPS fears that these sentences create a risk of circumvention of the AILD Proposal by inserting a human actor who merely “rubber-stamps” the outcomes of AI systems without any meaningful additional review (automation bias).
  • Shorter review period: Both the EESC and the EDPS recommend shortening the review period currently put forward in the AILD Proposal from five years to three years, so as to take into account the constant evolution of AI and to ensure a timely assessment of whether injured parties who suffer AI-caused damage indeed have effective access to compensation.

The PLD and AILD Proposals and EU data protection law

The EDPS also underlines the importance of explicitly confirming that the AILD Proposal does not affect the applicability of Union law on data protection. The opinion notes that the PLD Proposal contains an explicit reference to this effect in Article 2(3)(a), while the AILD Proposal is silent on the subject, which could lead to legal uncertainty. The EDPS therefore stresses the need to amend the AILD Proposal in order to ensure that the remedies provided by EU data protection law are not limited.

Furthermore, the EDPS notes that the definition of "damage" in the PLD Proposal could include material damage caused by the loss or corruption of data. This could lead to situations where the relevant data also constitute personal data under the General Data Protection Regulation (GDPR). In such cases, a consumer who has suffered the loss or corruption of personal data could be entitled to choose between a claim under the PLD Proposal and a claim under the GDPR, or to bring both. The EDPS welcomes this accumulation of rights and suggests including the same principle in the AILD Proposal.


EESC’s further recommendations

The EESC’s opinion contains certain additional recommendations on the AILD Proposal which are not reiterated by the EDPS. In particular, the EESC stresses the need for clearer legal definitions in order to mitigate divergent interpretations by stakeholders and judges across the EU. It defines the ultimate goal of the AILD Proposal as the development of a liability regime that is applied as uniformly as possible throughout the Union. To achieve this, the EESC recommends that the expertise of those responsible for applying the rules be further enhanced with appropriate digital capacity and training.

The EESC also focuses on the evaluation of the AILD Proposal after its entry into force. In addition to shortening the review period to three years, the EESC proposes to closely monitor the development of financial guarantees and (compulsory) insurance covering AI liability. In order to assess the need for legislative action in this area, the EESC recommends that incidents involving AI systems be documented and reported, so as to enable information gathering and the subsequent evaluation of the directive. Finally, the EESC recommends the establishment of a network of alternative dispute resolution bodies to help consumers exercise their rights under the AILD Proposal.


Next steps

The EESC and the EDPS issued their opinions on the AILD Proposal in the context of ongoing discussions within the Council and its preparatory bodies. Once the Council and the European Parliament have adopted their respective positions, interinstitutional negotiations will start. For the time being, discussions on the AILD Proposal have been deferred to a later stage, pending progress on the closely related AI Act Proposal.

The AI Act Proposal was scheduled to be discussed on 6 December 2023 and, although some compromise was reached, it remains to be seen how its finalisation will progress.

The PLD Proposal, on the other hand, is proceeding full steam ahead now that the Council and the European Parliament have adopted their negotiating mandates; trilogue negotiations started in October 2023 with the aim of reaching agreement on the final text of the directive. The next trilogue is planned for 14 December 2023.

Hogan Lovells is actively monitoring developments in this space – keep an eye out for our future updates.

*Thanks to Elisabeth Hertel for her support.


References

1 "There is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor. In the latter case, it is possible to trace back the damage to a human act or omission, as the AI system output is not interposed between the human act or omission and the damage, and thereby establishing causality is not more difficult than in situations where an AI system is not involved."


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
