Robots and Liability: who is to blame?

Dentons
[co-authors: Claudio Orlando Miele and Valeria Schiavo]

This article addresses some legal issues concerning the liability regime for conduct involving AI. In particular, we focus on liability and robots, it being understood that certain relevant principles may apply in general to all AI systems, such as machine learning and deep learning.

Currently, EU law does not include any ad hoc provisions for robots or, more generally, for AI. According to the resolution of the EU Parliament dated February 16, 2017 (the Resolution), which sets out recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), robots cannot be held liable per se for acts or omissions that cause damage to third parties. The existing rules on liability “cover cases where the cause of the robot’s act or omission can be traced back to a specific human agent such as the manufacturer, the operator, the owner or the user and where that agent could have foreseen and avoided the robot’s harmful behaviour.”

In this respect, the EU Commission Working Document on liability for emerging digital technologies, such as robots operating through AI systems (issued on April 25, 2018), underlines that we are facing a regulatory gap. Indeed:

  • Where a robot takes autonomous decisions, according to the Resolution “the traditional rules will not suffice to give rise to legal liability for damage caused by a robot, since they would not make it possible to identify the party responsible for providing compensation and to require that party to make good the damage it has caused.” In fact, emerging digital technologies are characterized by interdependency among their various hardware and software components and layers - such as “i) the tangible parts/devices (sensors, actuators, hardware), ii) the different software components and applications, to iii) the data itself, iv) the data services (i.e. collection, processing, curating, analysing), and v) the connectivity features” - which may make it difficult to predict the possible outcomes/developments of a technology before its launch on the market.
  • Moreover, digital technologies change continuously, due to software extensions, updates and patches released after their launch on the market or deployment in production. Any change to the software “may affect the behaviour of the entire system components or by third parties, in a way that can affect the safety of these technologies.” It is therefore crucial to allocate responsibilities among the various actors in the AI value chain.

Liability may be attributed to robot manufacturers pursuant to the provisions implementing Product Liability Directive 85/374/EEC. The Directive imposes strict liability (responsabilità oggettiva) on producers of defective products, including in the event of personal injury or damage to property. According to some commentators, there are grounds to argue that the Product Liability Directive may apply to robots causing damage to persons or property: for instance, where the producer did not properly inform the customer of the dangers associated with the autonomous robot, or where the robot’s safety systems were deficient.

Furthermore, we note that in various civil law countries the “strict liability” doctrine (responsabilità oggettiva) is the prevailing reference. Under strict liability, it is only necessary to prove that (a) damage occurred; and (b) such damage was caused by the conduct or omission of the liable party; there is no need to prove negligence or willful misconduct (generally required for torts). In Italy, for instance, it has been suggested that the strict liability rules could apply concerning the liability of a person carrying out a dangerous activity (Article 2050 of the Italian Civil Code), or the liability of parents, tutors, guardians and teachers for damage caused by a minor, pupil, student/apprentice or mentally impaired person (Articles 2047 and 2048 of the Italian Civil Code).

Some open questions have yet to be fully addressed. Once the (legal) person responsible for the damage has been identified (the AI manufacturer, the programmer, the supplier or the user), should that party’s liability be proportional to the “degree of autonomy” of the robot/AI system? And how should the degree of autonomy of the robot/AI system be properly assessed?

Lastly, some commentators advocate the creation of a “quasi-legal” personality for robots (e-Person), which could protect manufacturers and users against liability (similarly to the autonomous liability of companies, which is distinct from the liability of a company’s shareholders). Such a creation may only materialize in the medium/long term, since it would also imply a substantial and broader cultural shift in attitudes towards technology and AI products.

That said, it is not possible to predict how legislation on AI and liability will evolve, although most commentators regard the strict liability doctrine as the key driver of the ongoing legislative process.

AI robots and liability is a fascinating topic, and we will continue to report on the main developments.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dentons | Attorney Advertising
