Autonomous Vehicles: An Ethical and Legal Approach

June 20, 2017


Further to the consultation paper issued by the Department
for Transport, ‘Pathway to Driverless Cars: Proposals to Support Advanced Driver
Assistance Systems and Automated Vehicle Technologies’, consider the legal
implications of autonomous vehicles, and associated ethical issues. 


In 1939 General Motors sponsored the Futurama Exhibit in New
York which featured automated highways. In 1942, Isaac Asimov introduced the ‘Three
Laws of Robotics’ in his short story ‘Runaround’ – outlining the ethical basis
for robot-made decisions, which centred on never harming humans.

Self-driving cars are no longer a thing of futuristic
exhibitions and writings. Governments are supporting this new development
believing that autonomous vehicles (AVs) will make ‘road transport safer,
smoother, and smarter’.[1] Amidst the rush
to get AVs on the roads, it appears that the technological hype has surpassed
crucial considerations regarding the societal impacts of these cars, and the
fundamental decisions the algorithms will have to make. Uber CEO Travis
Kalanick believes that ‘the world is going to go self-driving and autonomous’.[2] If this is the
case, then we must start discussing the philosophy behind the programming and
assess all possible risks. It is acceptable, not fantastical, to define AVs as
robots, since they are ‘machine[s] capable of carrying out a complex series of
actions automatically.’[3]

This essay will argue that, like Asimov, we need to write
rules to govern our robots. Ultimately this essay concludes that, until such
rules are written, we must act in accordance with the legal concept of the
precautionary principle.
Part 1: The issue of embodying human sensibility into machines

In some capacity, AVs are already on the road. It is
possible to purchase a car which parks itself or maintains a desired speed
limit. Soon, it will be possible to purchase a car which not only manoeuvres
autonomously but makes moral decisions too. Whilst we have succeeded so far in
creating highly able robots which mirror human capability, the act of embodying
human sensibility within a machine is far more complex. As they have the
ability to react much faster than a human brain, it is imperative that we
programme AVs with responsible algorithms with which to make such decisions.
AVs will learn ‘how to react rather than being explicitly told what to do’,[4] and will face
morally challenging situations demanding critical evaluations,
optimising crashes for the best possible outcome. This presents a double-edged
sword, in that AVs will ‘teach themselves to save lives – but also
[to] take them’.[5] It is possible
that we will be asking these learning machines to make valuations on human
lives. In the event of an inevitable crash, would the AV swerve to avoid
hitting five children, with the knowledge that this will result in the death of
only one?

Calculations of societal impact are made frequently and,
along with cost-benefit analyses, form the basis for the many decisions which
have constructed the world we live in today. Utilitarians offer a similar
cost-benefit analysis in the search for morality and believe that actions are morally
right in so far as they bring about those consequences in which there is the
most possible happiness, or the least possible unhappiness. Jeremy Bentham, the
founder of utilitarianism, coined ‘the principle of utility … [which] approves or
disapproves of every action whatsoever, according to the tendency which it
appears to … promote or oppose happiness’.[6] A similar
cost-benefit analysis is made between the social utility of cars weighed
against their dangers. As a result, there have been continuous developments in
safety mechanisms, which generally increase the levels of safety, but are also
susceptible to causing injury or even death. Seatbelts and air-bags have been
implemented to save lives in the event of an accident; however, they have at
times caused death. Notwithstanding that, the risks are justified in that
ultimately the negatives are outweighed by the benefits.

It seems likely that the current preference for AV
programming will be based upon utilitarian reasoning.[7]
The appeal of utilitarian reasoning is that it is ultimately the foundation of
democracy – that decisions are made by the wishes of the majority. Whilst
widely applied, utilitarianism is fundamentally flawed and I will argue that it
should not be the basis of AV programming. I shall proceed with some
hypothetical situations analysing how AVs might achieve crash optimisation.
Whilst I discredit the use of utilitarianism for programming purposes, I do not
discredit a utilitarian cost-benefit analysis of the implementation of AVs altogether.
I will assess utilitarianism on its own terms by discussing the net-benefit of
AVs contrasted with the possible negative social implications, all the while
underlining the importance of jurisprudential dialogue in this particular
debate.

Part 2: Scenarios

Scenario A) A self-driving car turns a corner where five
pedestrians are crossing the road away from a designated crossing. On the
pavement is one pedestrian. The AV does not have time to brake safely. It can
either continue on its path and hit the five, or swerve – hitting the one and
saving the five.

Since it is inevitable that one of the groups will be hit,
in a Benthamite endeavour to maximise total happiness, it would be preferable
to change course and save the five pedestrians. In its simple form,
utilitarianism provides a superficially adequate response to a rudimentary
scenario. If we increase the complexity of the scenario, the flaws
begin to appear.
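The Benthamite ‘numbers game’ applied to Scenario A can be sketched in a few lines of code. This is purely illustrative, not a real AV control system: the path descriptions and casualty counts are hypothetical inputs, and the point is precisely how little the naive calculation takes into account.

```python
# Illustrative sketch only: a naive utilitarian crash optimiser that picks
# the available path with the fewest expected casualties. The data structure
# and field names are hypothetical.

def utilitarian_choice(paths):
    """Return the path whose expected casualties are lowest."""
    return min(paths, key=lambda p: p["expected_casualties"])

# Scenario A: continue and hit five, or swerve and hit one.
scenario_a = [
    {"action": "continue", "expected_casualties": 5},
    {"action": "swerve", "expected_casualties": 1},
]

print(utilitarian_choice(scenario_a)["action"])  # prints "swerve"
```

Note that the calculation considers nothing but the count: fault, age, safety precautions, and personal ties are all invisible to it, which is exactly the flaw the following scenarios expose.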

Scenario B) An AV is driving down a one-way road on which
two motorcyclists are mistakenly headed in the opposite direction. The AV is
driving too fast to stop safely and the road is too narrow to avoid both of the
motorcyclists. Only one of the motorcyclists is wearing a helmet.

A utilitarian would most likely assert that, to minimise
pain, the AV should take action and hit the motorcyclist wearing a helmet,
since the motorcyclist without one would suffer worse injuries. However, this
response punishes a citizen for being safe. It is in precisely this way that ‘utilitarianism
fails to take seriously our ordinary conceptions of justice because it claims
that in some cases it is right to punish the innocent’.[8]
Moreover, the cold ‘numbers-game’ calculation of a utilitarian measures only
one element, happiness, which is rather short-sighted, as this essay
will now demonstrate.

Scenario C) An AV is driving down a road when a child steps
out into it. There is no time to brake safely. On the pavement is an elderly man.
The AV can either continue on its trajectory and hit the child, or change paths
and hit the elderly person.

Scenario C cannot be assessed simply by evaluating
the quantity of happiness. Should we value a young life more than an elderly
life? One could argue the child has more time to live and potentially more
family members alive to miss them. Conversely, the child is at fault for
not looking before stepping into the road, while the elderly man, who may have a
wealth of knowledge and a vast family of his own and is innocent (having complied
with the rules), is punished. Moreover, it would be practically impossible for
a car to make these calculations without having an abundance of personal data
stored in its system. In changing trajectory the car has actively participated
in the death of a human, and this does not sit comfortably with a common sense
of justice.

Alternatively, the Doctrine of Double Effect (DDE)
distinguishes between intentional and unintentional foreseeable harm by
proposing ‘that it is sometimes ethically permissible to unintentionally cause
foreseeable harms that would not be permissible under the same circumstances to
intentionally cause’.[9] Thus, it would
be preferable to hit the young child who had come into the road, as this is
foreseeable yet unintentional, whereas to change trajectory and hit the elderly
man is intentional and foreseeable. This ethical distinction between
intentional and unintentional harm is where the doctrine may fall short when
applied to AVs, whose responses, if programmed, will always be pre-determined
and, in a sense, intentional.

However, whilst the response is pre-determined, it is not
intentionally personal, as the car acts without bias toward any
particular individual. The programming would be purely based on the ‘wrong-place,
wrong time’ idiom and, in this regard, seems a better alternative to a change
in trajectory. Additionally, it acts as a preventative measure to warn
pedestrians of road dangers.
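One way a programmer might encode the DDE preference described above can be sketched as follows. Again, this is a hypothetical illustration, not a proposed implementation: the rule prefers any path that does not involve an intentional change of trajectory, and only compares casualty counts among such paths.

```python
# Illustrative sketch only: a crash optimiser shaped by the Doctrine of
# Double Effect. Harm from staying on course is foreseeable but unintended;
# harm from swerving is both foreseeable and intended, so swerving is only
# considered when no on-course option exists. All field names are hypothetical.

def dde_choice(paths):
    """Prefer paths without an intentional change of trajectory;
    among the remaining candidates, minimise expected casualties."""
    unintended = [p for p in paths if not p["changes_trajectory"]]
    candidates = unintended if unintended else paths
    return min(candidates, key=lambda p: p["expected_casualties"])

# Scenario C: continue and hit the child, or swerve and hit the elderly man.
scenario_c = [
    {"action": "continue", "changes_trajectory": False, "expected_casualties": 1},
    {"action": "swerve", "changes_trajectory": True, "expected_casualties": 1},
]

print(dde_choice(scenario_c)["action"])  # prints "continue"
```

Unlike the utilitarian sketch, this rule would keep the car on course in Scenario C even though the casualty counts are equal, because changing trajectory is treated as intentional harm.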

To complicate matters further, consider scenarios in which
decision-makers have subjective preferences.

Scenario D) There is a burning house with two people inside.
One is your mother, who is otherwise an ordinary woman; the other is a famous
brain surgeon.

Adopting a utilitarian view, it seems that you should rescue
the surgeon, as this would bring the greatest happiness to the world overall. Yet
you would surely not sacrifice your own mother.[10]

Utilitarianism fails to represent one’s subjectivities, and
would perversely appear to demand irrational acts. This contradicts Bentham’s
desire for ‘articulating rational principles’,[11]
and thus, the theory fails by its own reasoning.

John Rawls criticises utilitarianism’s lack of justice, and
presents an alternative method of decision-making. One of his main criticisms
of utilitarianism was that ‘utilitarianism does not take seriously the
distinction between persons,’[12] in that it
fails to recognise an individual’s moral preference.

Rawls offers a hypothetical ‘original position’ from which a
citizen has to assess justice in total ignorance of their own particular
characteristics. Rawls states that, in this uncertain original
position, a rational citizen would make just rules for a general society, in
the hope of securing at least a minimum standard of
protection. Rawls ‘seeks to establish … what moral rational agents would agree
under ideal circumstances.’[13] However, when
applied to crash optimisation, Rawls’ theory is unhelpful. It is ultimately
impossible to ask citizens to put themselves in the original position. Crash
optimisation is a very real problem, and if faced with a dilemma such as
scenario C, Rawls’ theory offers no clear way to make such a decision other
than leaving individuals to decide using their own values of different
subjectivities, which is arguably worse than a cold utilitarian numbers game.

Part 3: Legal Implications

There needs to be careful consideration and a consensus as
to the foundations of AV regulation. Whilst the development of this technology is
undeniably exciting, we are potentially facing a complete remake of
transportation infrastructure, and it is necessary that we discuss a framework
with which to control and regulate it.


The idea that robots can make decisions autonomously raises
the question of exactly how intelligent artificial intelligence can become
before we start to lose control over it. Google, having hired Geoff Hinton, claims to
be ‘on the brink of developing algorithms with the capacity for logic, natural
conversation and even flirtation.’[14] Seemingly, the
line between human and robot is drawing incrementally closer since, ‘artificial
intelligence (AI) computer programs … embody “almost minds”.’[15] It will be
interesting to see how the law of ‘diminished responsibility, in which agents
are considered as being not fully responsible for their own actions,’[16] develops as the
intricacy of robot sensibility progresses.


A newly visible legal issue concerns liability. The law of
negligence will see some interesting challenges concerning failure to warn.
What will happen when an AV malfunctions and causes damage to property or
persons? Who will be liable: the individual, the designer, the car manufacturer,
or a combination? Volvo has declared that it would take full responsibility for
accidents involving its driverless cars, and Google has
made similar claims.[17] Other
manufacturers, such as Tesla, are aiming for a more individualised liability
structure.[18] Therefore it
seems that, by purchasing the car, an individual may have to accept any
subsequent legal ramifications. The concern here lies in whether individuals
would be willing to accept liability on behalf of a machine that makes
independent decisions. Moreover, these are machines that will be able to travel
across different borders, and thus the question arises of whether the AV would
change programming decisions dependent on jurisdiction.

Computer Misuse

If a situation arose, such as discussed when analysing Scenario
C, where AVs were supplied with personal data so as to make judgements, it
would be important for the machines to be able to comply with current data
protection and privacy laws. More concerning is the AV’s susceptibility to
hacking. The Evening Standard recently featured an article with the title
‘Driverless cars to detect bombs and alert police.’[19]
Whilst a built-in anti-terrorist mechanism is arguably a solution to some of
the most distressing societal problems today, we must ask what will prevent
those outside the police force from accessing the cars and their information.


What is certain is my distaste for utilitarian programming,
whereby a car actively causes the death of a human. Asimov’s first law of
robotics stipulates that ‘a robot may not injure a human being or, through
inaction, allow a human being to come to harm’, and is partly unrealistic,
although, of course, a fictional device. Whilst it is clear that no robot
should be designed to hurt a human, crashes are inevitable, and thus humans may
come to harm. Moreover, it is unjust to value one (or five) lives over
another and, in the event of an inevitable crash, it would be best to accept
the Doctrine of Double Effect. Something that seems natural to take from
utilitarian reasoning is that, if the implementation of AVs is successful,
crash optimisation will be less of an issue, due to increased safety. However,
this cannot be used as a superficial justification. There is a clear social
advantage to reap. However, as has been mentioned before, social utility must
be put into a holistic perspective, and while AVs could provide access for the
disabled, greener cities and safer streets, this must be balanced against the
probability of violations by hackers, inbuilt biases and techno-moral concerns.

The Government consultation paper wishes to begin ‘removing
barriers to the introduction of [AVs] where we can foresee them’ (at para 1.5).
However, I am sceptical of the haste with which the introduction
of AVs is being pursued. In this essay I have raised many concerns that need
to be considered further in order to reap the benefits of these machines. I
have highlighted the importance of jurisprudence in relation to technology.
Moreover, as technology becomes increasingly able, it is imperative
that an appropriate framework develops alongside it to facilitate
technology’s proper and responsible use.

It has become apparent that we must advance with caution. A
European Union principle is ‘if there is the possibility that a given policy or
action might cause harm to the public … and if there is still no scientific
consensus…, the…action in question should not be pursued.’[20]

This principle is particularly important to consider due to
the rapid development of artificial intelligence and the unknowns that
necessarily accompany any rapid development. The acclaimed theoretical physicist
Stephen Hawking has himself expressed concern that ‘the development of full
artificial intelligence could spell the end of the human race,’[21] and thus, it is
imperative that we observe the precautionary principle, whilst considering the
convergence of the societal, economical and, most importantly, jurisprudential
discourses which are intrinsic to the development of AVs.

Lottie Michael is about to complete the final
(4th) year of her Law with European Legal Systems degree at the
University of East Anglia

From the Czech robota, ‘forced labour’. The
term was coined in K. Capek’s play R.U.R ‘Rossum’s Universal Robots’ (1920)

[4] Jessica S. Brodsky, ‘Autonomous Vehicle Regulation:
How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars’,
Berkeley Technology Law Journal, Vol. 31:AR, 2017, 863

[6] Jeremy Bentham, ‘An Introduction
to the Principles of Morals and Legislation’, Chapter I, para II, Library of
Economics and Liberty

[7] Christopher Hooton, ‘Self-driving
cars may have to be programmed to kill you’, The Independent, 2017

[8] Christopher Hamilton, ‘Understanding Philosophy for
AS-Level’, Nelson Thornes Ltd, 79

[9] Edward C Lyons, ‘Balancing Act: Intending Good and
Foreseeing Harm – The Principle of Double Effect in The Law of Negligence’, The
Georgetown Journal of Law and Public Policy, Vol. 3:453

[10] Hamilton, ibid n8, 77

[11] Internet
Encyclopaedia of Philosophy, accessed
08 May 2017, para 4 

[12] John
Rawls, ‘A Theory of Justice’, Harvard University Press, 1971, 27 

[13] Raymond Wacks, ‘Understanding Jurisprudence’, Oxford
University Press, 2nd edition, 301

[15] Donna Haraway, ‘Primate Visions, Gender, Race, and
Nature in the World of Modern Science’ Routledge 1989, 376

[16] Peter Asaro, ‘Robots
and Responsibility from a Legal Perspective’, 2007, 1st ed. [ebook]

[19] Mark Blunden, ‘Driverless cars to detect bombs and
alert police’, Evening Standard, April 20 2017

[20] Glossary of summaries – EUR-Lex,
accessed 7 May 2017