Res Robotica! Liability and Driverless Vehicles

August 24, 2014

The blockbuster film I, Robot is set in 2035 and was inspired by the mid-20th-century works of science fiction writer Isaac Asimov. It portrays a world where robots work alongside and for humans. With striking perspicacity, Asimov considered it important to lay out a set of rules for the machines in his vision of the future. He introduced the Three Laws of Robotics (with a fourth, the ‘zeroth law’, added later) in his 1942 short story Runaround.

0.        A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

1.        A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.        A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3.        A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Three Laws of Robotics

from Runaround by Isaac Asimov, 1942


The laws clearly assume a level of awareness of action that places such conscious machines somewhere between our traditional views of legal objects and legal persons. Professor Lilian Edwards and others have spoken extensively about the applicability of Asimov’s laws and, as part of a joint EPSRC and AHRC Robotics Retreat in 2010, proposed some ‘principles for designers, builders and users of robots’[1] aimed at developing Asimov’s rules for real-world applications, as well as setting out some overarching messages designed to encourage responsibility within the robotics research and industrial community.

Ten years on from the I, Robot film (and more than 70 from the stories that inspired it), robots are becoming mainstream. Helen Greiner, the co-founder of iRobot (perhaps confusingly for this article, a NASDAQ-listed robotics company) and CEO of CyPhy Works, believes that unmanned vehicles could be the new must-have household accessory, with the potential to accompany children to the bus stop, collect and deliver forgotten homework and guard the house whilst you are out.[2] Honda’s Asimo[3] (a creation not too distant from the NS-5 in I, Robot) demonstrates how far we have come with humanoid robotics.

This article concentrates on the development of robotic driverless road vehicles and specifically the issue of liability on the road.

Driverless vehicles

Today, driverless vehicles are heavily restricted on public roads. This is set to change. The societal benefits of removing the human driver are clear: improved safety (by eliminating human error), reduced congestion, better mobility for elderly and disabled people, and reduced energy consumption. The potential of the technology is fascinating and has been widely written about. The critical thinker Chunka Mui considers many of these issues in his Forbes article ‘Fasten Your Seatbelts: Google’s Driverless Car Is Worth Trillions (parts 1 and 2)’.[4]

Google’s ongoing driverless car project has attracted much media interest. The company has already confirmed its intention to produce a further 100 autonomous cars, and regulations in California have been redrafted to allow autonomous vehicles on public roads.[5] According to California’s Department of Motor Vehicles, the rules governing public use are currently being developed and could be adopted as early as January 2015.[6]

Governments in many other countries are reviewing road traffic laws to allow the safe integration of driverless cars. The Netherlands has announced a five-year plan to integrate autonomous vehicles by redrafting legislation, building infrastructure and funding research, with a particular emphasis on trucks.[7] Nevada and Michigan in the US, Gothenburg in Sweden and Japan are all pushing forward with plans to facilitate driverless vehicles.[8] The UK government has backed a project to create and launch the Lutz Pathfinder in Milton Keynes. This project will see the creation of driverless pods that will transport the public through the streets of Milton Keynes, with 20 pods planned for launch in 2015 and 100 by 2017.[9]

The UK’s Department for Transport and Department for Business, Innovation and Skills, working in partnership with the Technology Strategy Board, are to invest up to £10m in a competition for collaborative R&D projects to research further how driverless cars can be integrated into everyday life in the UK. As part of that initiative, the Secretary of State for Transport is conducting a review of the legislative and regulatory framework for testing driverless cars, due by the end of December 2014. In early August 2014, the British government issued an open consultation, ‘Driverless cars: regulatory testing framework’, and released a discussion document and call for evidence. In the documentation, cars with advanced autonomous safety systems are divided into two broad categories:

·     ‘High automation’ – cars that are capable of operating on the road network without human intervention, but are fitted with a full set of driving controls, and in which a driver must be able and ready to assume control.

·     ‘Full automation’ – cars that are capable of operating on the road network without human intervention, and in which a driver need not be able and ready to assume control.

At this stage most resources are focused on addressing the immediate legislative and regulatory situation so that testing and development of high automation driverless vehicles can progress from test tracks to public roads.

Full automation liability – who to blame?

Unmanned vehicles, including full automation driverless cars, are tangible objects much like every other product humans have made over time – except that they are capable of storing and using information, making decisions independently and physically acting on them. This means that robots can cause actual physical damage in the real world independently of their owners and operators, which raises an important legal question: who is liable if something goes wrong?

The question of who is liable for damage caused by a robot, specifically a driverless vehicle, is not new. Several legal academics have wondered: could it be the owner or user, the designer or programmer, the manufacturer, or even the technology itself? As autonomous robotic technology becomes more widespread, affordable and available, the risk that driverless vehicles will cause physical damage and injury to people and property grows. Traditional legal methods of apportioning blame to legal persons, such as via torts and product liability, are arguably going to become more difficult to apply.

What have the Romans done for us?

All right… all right… but apart from better sanitation and medicine and education and irrigation and public health and roads and a freshwater system and baths and public order… what have the Romans done for us?[10]

In ancient Rome, slaves were recognised by the law as a ‘human thing’,[11] essentially a special form of property. Slaves were owned; they had very few rights and no legal personality as such. Slaves could not own property, and anything they acquired or obtained belonged to their master by default. Despite this, a slave could be a considerable asset to its master. The most valuable slaves were used in their master’s businesses and could act on their master’s behalf to trade with other businessmen. Because slaves had no legal personality, they could not be party to a contract, and any contract made would have been unenforceable. The idea of a peculium was introduced to enable business to be carried on through slaves. The peculium was a pot of assets, owned by the master but used by the slave. These assets could comprise money, property, animals and even other slaves. A key feature of the peculium was that it was not a static fund; it ebbed and flowed with the slave’s business, and its size was only determined when a claim was brought against it.[12] The peculium not only allowed slaves to trade on a master’s behalf but also encouraged other business owners to trade through slaves, as it gave them something enforceable to rely on.

Res Robotica and a Robot Peculium?

The authors’ phrase ‘res robotica’ is a tongue-in-cheek nod to the prospect that law makers of the future may have to address the need for a new form of legal personality, an ‘intelligent thing’, that sits somewhere between a legal object or chattel and a legal person. Objects do not have rights and obligations in law, but perhaps intelligent autonomous systems will. Maybe this is not such a radical concept: it is well established that legal personality attaches not only to natural persons but also to legal entities, such as companies, that are treated similarly in law.

Italian academic Ugo Pagallo endorses the idea that the laws regulating robotics could be guided by the laws of Rome. In The Laws of Robots,[13] he goes beyond the relationship of agent (the robot – slave) and principal (the master – owner) and considers the use of ‘a digital peculium’. He presents the concept of a digital peculium backed by an insurance policy, which could be used to cover any damage caused by the actions or omissions of a robot in the event of a claim against the robot and its owner.[14] Pagallo considers that the idea strikes a balance between the uncertain risks of the technology and the protection of human interests through legal accountability.[15] As any injured party would claim against the peculium rather than the owner, manufacturer or designer, the peculium could help mitigate the question of who is liable.[16]
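The mechanics of such a fund can be sketched in a few lines of code. The following is a purely illustrative model, not drawn from Pagallo: all class names, figures and the two-stage settlement order (fund first, then insurer, then an unmet residue) are the authors of this sketch’s assumptions about how a digital peculium backed by insurance might settle a claim.

```python
# Hypothetical sketch of a 'digital peculium': a ring-fenced fund plus an
# insurance policy that together cap what an injured party can recover,
# shielding the owner's wider assets. All figures are invented.

class DigitalPeculium:
    def __init__(self, fund, policy_limit):
        self.fund = fund                   # assets set aside for the robot
        self.policy_limit = policy_limit   # insurer's remaining maximum contribution

    def settle(self, claim):
        """Pay a claim from the fund first, then insurance; report any shortfall."""
        from_fund = min(claim, self.fund)
        self.fund -= from_fund
        from_insurer = min(claim - from_fund, self.policy_limit)
        self.policy_limit -= from_insurer
        unmet = claim - from_fund - from_insurer
        return from_fund, from_insurer, unmet

p = DigitalPeculium(fund=5_000, policy_limit=20_000)
print(p.settle(12_000))  # (5000, 7000, 0): fund exhausted, insurer covers the rest
print(p.settle(15_000))  # (0, 13000, 2000): a 2,000 shortfall is left unmet
```

The shortfall in the second claim is precisely where the legal interest lies: once both the fund and the policy are exhausted, some rule must say who, if anyone, bears the residue.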

English lawyer Andrew Katz[17] suggests that robotic technology could be given an authenticated identity through a trust scheme – not a mandatory one, but one where failure to join would render the owner of the technology strictly liable for its actions or omissions. Like Pagallo, Katz suggests that the peculium should be backed by an insurance policy.

Introducing the idea of a peculium linked to insurance could provide the legal flexibility required to accommodate the evolving technology[18] and tackle the problem of apportioning blame to a traditional legal person. Several questions remain, not least:

·     Who would determine the risks that a driverless vehicle could potentially pose and how?

·     How would an insurer calculate such an exponential risk when driverless vehicles are constantly evolving and learning?

·     How would one rate the vulnerability of driverless vehicles to hacking, who would be liable in the event that the technology was hacked, and how could this be proven?

·     Would an insurance company be able to exclude certain risks and what would happen if a ‘high risk’ driverless vehicle was uninsurable?

·     If the digital peculium became mandatory, what would happen where a driverless vehicle was uninsured and caused damage? Would it be (as Katz suggests) a case of strict liability, or could there be a scheme allowing the victim to claim from a global agency, much as the victim of an uninsured driver can claim from the Motor Insurers’ Bureau? If so, who would contribute to that agency’s fund: the owner/user, the designer/programmer or even the manufacturer?

Many governments are reviewing domestic primary and secondary legislation, as well as international laws and conventions covering road traffic such as the 1968 Vienna Convention, in order to facilitate testing and development of driverless cars on public roads. At some stage law makers and the insurance industry must consider the liability issues posed by intelligent autonomous systems. Perhaps the neo-Roman concepts of res robotica and a robot peculium have a part to play.

Peter Lee is a Senior Associate at Taylor Vinters LLP and a Technology Law Group committee member at the Society for Computers and Law.

Sabrina Richards is a Trainee Solicitor at Taylor Vinters LLP.


[2] Re-Imaginer of Robots: Helen Greiner at TEDxBoston



[5] Use restricted to testing under the Autonomous Vehicle Testing Regulations, as mandated by Senate Bill 1298 (Vehicle Code Section 38750).

[6]




[10] Life of Brian, Monty Python

[11] Paul du Plessis, Borkowski’s Textbook on Roman Law (4th edn, Oxford University Press) p 92

[12] Paul du Plessis, Borkowski’s Textbook on Roman Law (4th edn, Oxford University Press) p 94

[13] Ugo Pagallo, The Laws of Robots (2013)

[14] Ugo Pagallo, The Laws of Robots (2013) p 133

[15] Ugo Pagallo, The Laws of Robots (2013) p 190

[16] Ugo Pagallo, The Laws of Robots (2013) pp 104-108. It is accepted that this idea is subject to whoever contributes to the peculium, and it is noted that it may offer a way to escape liability.

[17] Andrew Katz, ‘Intelligent Agents and Internet Commerce in Ancient Rome’ (2008)

[18] Ugo Pagallo, The Laws of Robots (2013)