Who would be responsible if KITT, the supercar of the famous TV series, were to cause an accident: the company that produced the car, the driver or maybe KITT itself?
It’s unlikely that this question occurred to us while watching Knight Rider in the 80s, but today it is a legal dilemma carrying immense stakes.
A few weeks ago, Nikkei Asia published an article anticipating that Honda would include a “level 3” driving-assistance system in one of its most popular models. The Japanese company is merely the latest entrant in a competition that began long ago in the US between Tesla and Alphabet, back then a bet between nerds aiming to transform the traditional automobile into a four-wheeled computer.
The disruption these tech titans caused in the car industry is now apparent. An apt symbol of this transformation was the presence of Mary Barra, Chairman and CEO of General Motors, not at a traditional car show but at the 2021 Consumer Electronics Show in Las Vegas. CES invited the leader of the Chevy manufacturer to give the keynote address at the convention. In her speech, she reinforced GM’s commitment to the production of electric, self-driving cars, an ethos powerfully summarized in her vision of a world with “zero crashes, zero emissions, zero congestion”.
A mobility revolution similar to the one Henry Ford generated more than 100 years ago is on the horizon. But technology, as usual, evolves faster than the law.
Only two countries in the world have regulations to account for damages arising from the circulation of self-driving cars.
The first one is the United Kingdom, which issued a law targeting cars “safely driving themselves in at least some circumstances or situations”. When the car is driving in such circumstances or situations, potential damages would be shouldered by the insurance company. This law, however, has yet to take effect.
More recently, Japan issued its own set of regulations to resolve the same issue. In their version, the responsibility would fall on the driver.
Italian lawmakers have yet to set out any rules governing such situations.
The Italian civil code provides for a joint responsibility of the driver and the owner of the car for damages deriving from the circulation of a vehicle. However, the concept of “vehicle” is still the traditional one: according to the traffic code, as recently amended, vehicles are machines “driven by humans”.
Let’s then check whether other existing rules could apply by analogy.
Certain provisions of law govern non-contractual liability for damage caused by an entity endowed with its own intellect. This principle has existed in the Italian legal system since Ancient Rome: the owners (dominus or domina) of slaves were responsible for any damage caused by their servants, provided they had made no attempt to prevent it.
Currently, a provision of the Italian civil code traces its roots back to this legal tradition: damage caused by workers in the discharge of duties assigned to them by their company obliges the company to compensate for it. Fascinating as this idea may be, a perfect analogy between human intelligence and artificial intelligence is (still) inappropriate.
Likewise, the rules governing liability arising from dangerous activities, and the more general principle of tort liability in the Italian civil code, cannot be applied by analogy, since the code already contains a specific provision regulating car circulation.
It will therefore be necessary to amend the Italian legal system to introduce a new kind of liability that mediates the allocation of risk between the driver (or owner) as user and the producer-provider of the car.
Unless, of course, the European Union promulgates a rule for all Member States. The argument has already been broached: the European Parliament has urged the European Commission (the EU institution holding the power of legislative initiative) to legislate on the “civil law rules on robotics”.
In a nutshell, the European Parliament analyzed the kinds of damage that could be caused by applications of artificial intelligence and acknowledged the absence of existing EU rules capable of properly governing the underlying conflicts of interest.
As its main solution, the EU Parliament document suggests introducing a strict liability regime to compensate for damage caused by robots using AI, together with certain mandatory insurance policies. The document brilliantly suggests that liability be apportioned between the owner-user and the producer-provider on the basis of their respective shares of responsibility in “educating” the artificial intelligence.
Finally, the document also adds another fascinating and provocative option: creating a specific legal category for robots with AI (“electronic personality”), which would themselves be responsible for any damage they cause.
“KITT, you need a lawyer!”