Autonomous Weapons Systems: Is Regulatory Control Needed?

October 22, 2018

The use of artificial intelligence in the defence sector raises strong emotions. The risk of ‘killer robots’ running wild is a common Hollywood theme. However, as AI becomes commonplace in the business environment, the defence sector is following quickly, and the potential risks from the use of AI in weapon systems are becoming more real. There is a need for more widespread and informed debate on the need for, and extent of, regulation of the use of AI in the defence sector.

House of Lords Select Committee on AI

The issue was considered in some depth recently by the House of Lords Select Committee on Artificial Intelligence (very ably chaired by Lord Clement-Jones, who recently chaired a panel discussion on AI at the SCL Annual Conference 2018). The Select Committee report warned that a ‘lack of semantic clarity could lead the UK towards an ill-considered drift into increasingly autonomous weaponry’. The Select Committee identified the MoD’s definition of AI weapons systems as a significant concern, as it sets a considerably higher threshold for autonomy than the definitions used by most other countries.

The most recent MoD definition of AI weapons systems is set out in the MoD’s guidance on unmanned aircraft systems (September 2017), which provides that:

An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present.

In the hearings of the Select Committee, Professor Noel Sharkey commented that the requirement in the MoD’s definition for an autonomous weapon system to be capable of being ‘aware and show intention’ sets the bar for autonomy so high that it is effectively meaningless. This allows the MoD to avoid the issue of whether or not it should be developing and potentially deploying AI weapons systems. The MoD can claim, on the basis of this very demanding definition, that ‘the UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all’.

However, AI-based weapons systems are starting to be developed and deployed in practice. During the Select Committee oral hearings, Mike Stone (former MoD Chief Digital and Information Officer) commented that AI in the defence sector is currently ‘most prevalent’ in the back office rather than in the military arena, but that there is some AI in cyber defence and a limited amount in defensive ship-to-air missiles. In his view, ‘the genie is out of the bottle’ in the sense that the use of AI is becoming so widespread outside the military that it is inevitable it will cross over into military use, particularly as much military innovation and investment is ‘coming from the civilian world and being brought into the military world’.

In the same oral hearing, Professor Noel Sharkey commented that, in his view, the major concerns around the use of AI in the defence sector relate primarily to two areas: autonomy in target selection and autonomy in the application of violent force. More generally, and from the perspective of international humanitarian law, there are concerns that the use of AI in target selection and the application of violent force may not comply with the principles of distinction, proportionality, military necessity and precautions in attack. It is arguable that, if weapon systems can identify and prioritise targets, some form of meaningful human control is required in order for them to comply with international humanitarian law.

The MoD is able to avoid having to deal with this issue on the basis of its overly demanding definition of autonomous weapons systems. However, in its response to the Select Committee report, the UK government has said that it has no intention of changing that definition.

US Directive on AI Weapons

The USA introduced a specific legal framework for AI weapons systems in November 2012, when it issued Directive 3000.09 (updated in 2017; see http://www.esd.whs.mil/DD/), establishing policy for the ‘design, development, acquisition, testing, fielding, and … application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems.’

It was a first attempt at establishing policy prescriptions and demarcating lines of responsibility for the creation and use of semi-autonomous, ‘human supervised’ and fully autonomous weapons systems. In layman’s terms, it attempts to answer the who, what, when, where and how of autonomous systems in military combat.

The Directive sets out reasonably clear lines of responsibility for system development, testing and evaluation, equipment/weapons training, as well as developing doctrine, tactics, techniques and procedures. The explicit purpose of the Directive is to establish guidelines to ‘minimize the probability and consequences of failures in autonomous and semi-autonomous weapons systems that could lead to unintended engagements’. ‘Unintended engagements’ are, in other words, ‘the use of force resulting in damage to persons or objects that human operators did not intend to be the targets of US military operations’.

The Directive raises as many questions as it answers. One of the key worries is the extent to which the policy could be avoided in certain circumstances. The Directive may also erode the concept of ‘proper authority’ for the use of violent force and thus raise questions over compliance with international humanitarian law.

UN Convention on Conventional Weapons

The United Nations is actively considering the imposition of a ban on AI weapons systems. A process has been set up to consider whether AI systems should be restricted under the Convention on Certain Conventional Weapons, a disarmament treaty that has regulated or banned several other types of weapons, including incendiary weapons and blinding lasers.

Meetings of the Group of Governmental Experts to review this issue have been held since 2014. At the most recent meeting in September 2018, 26 countries supported an outright ban on fully autonomous weapons. A small number of countries, including Australia, Israel, Russia, South Korea, and the United States, oppose a new treaty, political declaration, or any other new measures dealing with the introduction of AI weapons systems.

The issues will continue to be debated at a meeting of the UN High Contracting Parties in November 2018. The options currently ‘on the table’ relate to proposals for:

  • a legally-binding instrument stipulating prohibitions and regulations on lethal autonomous weapons systems, potentially including a requirement for human control over the critical functions in lethal autonomous weapons systems;
  • a political declaration that would outline important principles such as the necessity of human control in the use of force and the importance of human accountability, and with elements of transparency and technology review;
  • further discussions on the human-machine interface and the application of existing international legal obligations, including legal weapons reviews required by the Geneva Conventions, and the identification of practical measures, best practices and information sharing; and
  • a view that, as international humanitarian law is fully applicable to potential lethal autonomous weapons systems, no further legal measures are needed.

These options are not necessarily mutually exclusive. It is not at all clear which option or options, if any, may be chosen, given the power of the relatively small number of countries that oppose positive action. Campaigning organisations (such as the Arms Control Association) make the point that ‘the time for discussion is over and that the dangers of deploying lethal autonomous weapons have been sufficiently demonstrated to warrant the initiation of formal negotiations on meaningful control mechanisms’.

Concluding Thoughts

Whilst the risks of AI weapons systems merit a serious assessment of their use in armed conflict, a ban on AI weaponry aimed at preventing it getting into the ‘wrong hands’ is not likely to be effective. AI is becoming pervasive. It is different from most chemical weapons and many other weapons of mass destruction, which are complex and expensive to produce and whose manufacture can be observed.

As AI becomes more easily accessible, it is inevitable that AI solutions will be linked to weapons systems to give them AI capabilities. Ironically, Elon Musk is one of the leading promoters of OpenAI and is also actively campaigning for legislative controls on AI weapons systems.

Terrorists and rogue states are probably already in the process of developing AI weaponry. An autonomous car with a bomb in the boot may well be an autonomous weapons system. Effective defensive capabilities are needed to counter AI weapons used by the ‘bad guys’. There is a need to ensure that well-meaning constraints on the use of AI weapons systems do not result in exposure to AI weaponry used by terrorists and rogue states.

Roger Bickerstaff is a Partner at Bird & Bird LLP and an SCL Fellow.