When Algorithms Kill

August 23, 2016

The topic of autonomous weapon systems, often referred to in the media and by activists as ‘killer robots’, has been much debated in recent years. Contributions to the debate tend to trade on the fact that, by general agreement, such systems do not yet exist: ‘could be’ and ‘possibly’ statements, often interspersed with pictures of the Terminator and other robots from the movies, are common. Autonomous weapon systems are also often confused with remotely-piloted systems – that is, drones – which complicates the debate further. This article focuses on the legal and ethical issues arising from the development of autonomous weapons. It views autonomous weapons as a particularly important and concerning instantiation of algorithmic decision-making because they involve the algorithmic selection and engagement of a target (with lethal force) without human intervention.

There is at present no internationally accepted definition of an ‘autonomous weapon system’, a situation that does not help the discussion of the issues involved. I prefer an ‘umbrella definition’ under which the term ‘autonomous weapon system’ refers to a computerised system in which a lethal weapon can engage a target without requiring the input of a human operator at the time of engagement.

Most of the weapon systems that are loosely referred to as ‘autonomous’ are not autonomous according to the computer-science understanding of that term. In those terms, it would be more accurate to describe the type of weapon systems in use today as automated weapon systems but, unfortunately, use of the term ‘autonomous’ is almost ubiquitous. The umbrella definition above deliberately leaves aside the question of whether a weapon system described as autonomous is in fact merely automated. An automated system behaves in exactly the manner in which it was programmed to behave: its functions are governed by pre-set algorithms. An autonomous weapon system, by contrast, in the computer-science understanding of the term, has artificial intelligence: the ability to learn and to behave in ways not pre-planned by the computer’s programmers.

Many states and most commentators agree that there are no fully autonomous weapon systems (with artificial intelligence) in operation at the time of writing. That position, however, relies on the technical, computer-science definition of autonomous systems as systems with artificial intelligence. Many automated systems that meet the broad umbrella description of ‘a computerised system in which a lethal weapon can engage a target without requiring the input of a human operator at that time’ have been in operation for several years. Probably the best-known automated weapon system already in use is the so-called Iron Dome system deployed to protect many areas of Israel. Its sensors detect ‘incoming’ missiles or aircraft that the algorithms in its computer system identify as a threat. Because the limited time available, and possibly multiple simultaneous threats, would not permit a human response, the Iron Dome’s computers allocate individual threats to its own weapons and destroy them before they can reach their intended destinations. A 2015 study by the US Center for a New American Security reported that ‘at least 30 countries have defensive systems with human-supervised autonomous modes that are used to defend military bases and vehicles from short-warning attacks, where the time of engagement would be too short for a human to respond’ [1].
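To make concrete what ‘engaging a target without human input at the time of engagement’ means in an automated (rather than autonomous) system, the following minimal Python sketch shows a purely hypothetical, rule-based engagement check. The `Track` class, the thresholds and the `should_engage` function are invented for illustration and bear no relation to the actual logic of Iron Dome or any other real system; the point is simply that every branch is fixed in advance by a programmer.

```python
from dataclasses import dataclass

# Purely illustrative threshold values; no relation to any real system.
SPEED_THRESHOLD_M_S = 300.0    # objects faster than this are treated as possible missiles
PROTECTED_RADIUS_M = 10_000.0  # engage only if the projected impact point is this close

@dataclass
class Track:
    """A simplified, hypothetical radar track for one detected object."""
    speed_m_s: float
    projected_impact_distance_m: float
    identified_friendly: bool

def should_engage(track: Track) -> bool:
    """Pre-set engagement rule: every branch below was written in advance.

    The system is 'automated' in the article's sense: it can fire without a
    human in the loop at the moment of engagement, but it cannot learn or act
    outside these fixed rules.
    """
    if track.identified_friendly:
        return False
    if track.speed_m_s < SPEED_THRESHOLD_M_S:
        return False
    return track.projected_impact_distance_m <= PROTECTED_RADIUS_M

# Example: a fast, unidentified object heading towards a protected area.
incoming = Track(speed_m_s=850.0, projected_impact_distance_m=4_000.0,
                 identified_friendly=False)
print(should_engage(incoming))  # True -> a weapon would be assigned automatically
```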

It is universally accepted that international humanitarian law, often referred to as the law of armed conflict, governs the use of all weapon systems and specifies the principles to be applied when lethal force is being considered and used. The principle of distinction provides that civilians may never be the object of an attack; the International Court of Justice has referred to this as a ‘cardinal principle’ of humanitarian law. [2] Civilians and civilian objects are protected from attack unless, and for such time as, they directly participate in hostilities. The decision whether a person is a combatant (liable to be targeted because of that status alone), a civilian (protected from attack) or a civilian directly participating in hostilities may be extremely complex, and whether or not an autonomous weapon system could make such evaluations would be strongly disputed.

While it is always illegal to direct an attack against civilians, there will be cases where protected civilians are killed or injured in an attack on a military objective. Every attack must therefore be analysed in advance in accordance with the principle of proportionality. This analysis must consider the expected (not merely the possible) civilian casualties and balance them against the overall military advantage anticipated to result from the attack. As with international humanitarian law generally, the proportionality analysis is an attempt to balance military necessity against humanity: will the expected civilian casualties be justified by the overall military advantage that is likely to accrue? While there is already software in use in remotely-piloted vehicles (drones) that carries out what are known as ‘collateral damage estimates’, [3] no system appears yet to have been claimed capable of estimating, and continuously re-evaluating, military advantage.
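The difficulty highlighted here can be made visible with a deliberately naive, hypothetical sketch. If a system were handed a single number for ‘expected civilian harm’ and another for ‘anticipated military advantage’, the proportionality rule would collapse into a trivial comparison; the sketch below does not describe any existing collateral-damage-estimate software and is offered only to show that the hard part is producing and continuously revising those inputs, not performing the comparison.

```python
def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float) -> bool:
    """Naive, hypothetical balancing test on a made-up common scale.

    Returns True if the attack would not be 'excessive' on this toy scale.
    In law, neither quantity is a simple number: 'expected' harm and
    'anticipated' advantage are forward-looking judgements that must be
    revisited as circumstances change during the attack.
    """
    return expected_civilian_harm <= anticipated_military_advantage

# The comparison is trivial; the open question is where these numbers come
# from and how they would be re-estimated from moment to moment.
print(proportionality_check(expected_civilian_harm=2.0,
                            anticipated_military_advantage=5.0))
```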

Persons carrying out an armed attack are required to do everything ‘feasible’ to verify that the target of the attack is a military objective, and to take precautions – including the selection of the means and method of attack and the giving of warnings (‘unless circumstances do not permit’) – in order to avoid, or at least minimise, casualties among protected civilians. There is no definitive legal requirement to use advanced, highly precise weapons in carrying out an attack, although an autonomous system may well have an array of weapons available with which to attack precisely. And while an autonomous system would probably have the technical capability to carry out mechanical tasks such as giving warnings and cancelling or suspending an attack if necessary, the capacity to make continuous and ongoing judgements throughout an attack is also a critical requirement.

The question of responsibility for the actions of autonomous weapon systems gives rise to what some writers refer to as an ‘accountability gap’: who can be held (legally and morally) responsible for the actions of a robot that, in effect, made its own decisions and acted in a manner that was neither predictable nor foreseeable?

Many commentators have concerns regarding the ethics of the use of lethal force by autonomous weapons, referring to the ‘transfer of life-and-death decisions to machines’. [4] The perceived lack of dignity in having a life-or-death decision made by a robot is summarised by Christof Heyns, the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, who considers that:

even if it is assumed that [autonomous weapon systems] … could comply with the requirements of [international humanitarian law], and it can be proven that on average and in the aggregate they will save lives, the question has to be asked whether it is not inherently wrong to let autonomous machines decide who and when to kill. [5]

Human Rights Watch considers that an autonomous weapon’s lack of empathy and inability to show compassion deprive the robot of ‘a powerful check on the willingness to kill’. While the organisation acknowledges that ‘robots are immune from emotional factors, such as fear and rage, that can cloud judgment, distract humans from their military missions, or lead to attacks on civilians’, it adds that ‘decisions over life and death in armed conflict may require compassion and intuition’.

The main forum in which states and civil society have discussed the question of autonomous weapon systems is the UN Convention on Certain Conventional Weapons (CCW), where week-long ‘informal meetings of experts’ have taken place annually since 2014. Subject to approval by states at the convention’s Review Conference in December 2016, the discussion process is expected to move to a more formal ‘Group of Governmental Experts’ in April 2017. However, the CCW operates on the basis of consensus decisions, and it is difficult to see even a majority position among the states parties to the convention at this time.

While the topic of autonomous weapons may not immediately come to mind in a discussion of algorithmic governance, it is an area that needs to be considered. The legal and ethical considerations, the possibility of an accountability gap and the question of human dignity are among the many issues that also arise in non-military areas of algorithmic governance. The extremes presented by lethal weapons, whether managed by algorithm or, in the future, by artificial intelligence, can help to focus the mind and contribute to the debate about algorithmics, governance and the law.

Peter Gallagher is a PhD candidate at the Irish Centre for Human Rights, National University of Ireland, Galway.



[1] Paul Scharre and Michael C Horowitz, An Introduction to Autonomy in Weapon Systems (CNAS Working Paper, Center for a New American Security, February 2015) <http://www.cnas.org/sites/default/files/publications-pdf/Ethical%20Autonomy%20Working%20Paper_021015_v02.pdf> accessed 15 February 2015, 3.

[2] ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 ICJ 226, para 78.

[3] Michael N Schmitt, ‘Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics’ [2012] Harvard National Security Journal Feature, 19.

[4] CCW, Report of the 2015 Meeting of Experts (April 2015) para 33.

[5] UN General Assembly, ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns’ (9 April 2013) UN Doc A/HRC/23/47.