Artificial Intelligence: The Real Legal Issues

July 21, 2017

If you’re reading this, the chances are that you will have come across the concept of Artificial Intelligence in your prior research. Like most issues ‘du jour’, a lot has been written on the topic, most of which falls into two categories: material that either presupposes a level of prior computer science knowledge or, more commonly, is thinly disguised salesware which doesn’t convey a lot.

This article will hopefully provide the uninitiated (and semi-initiated) with a firm grounding on which to base a practical assessment and understanding of the legal risks posed by the use of AI.

I have identified two categories of legal risk: the ‘Causation Challenge’ and the ‘Big Data Challenge’. But, before we get into a discussion of these challenges, it is worth looking briefly at the business motivators that are pushing the boundaries of AI and at the current technological developments, if only to gain a wider appreciation of the real-world applications which are driving the use of AI.

I should say at the outset that within AI it is machine learning – the capacity for machines to learn and take independent decisions – that is creating serious conceptual difficulty for lawyers. At the heart of this conceptual struggle is the knowledge that whilst we can teach machines to learn and develop independent behaviours, on a scientific level we are still at a loss to understand how they do so, and this can lead to some very unpredictable outcomes – see for example the Google Brain neural net which, tasked with keeping its communications private, completely independently developed its own encryption algorithm.[1]

There are several real-world ‘machine
learning’ applications which are driving developments in the technology:

·       Image processing and tagging

Image processing, as the phrase suggests, requires algorithms to analyse images in order to extract data or to perform transformations. Examples include identification/image tagging – as used in applications such as Facebook to provide facial recognition, or to ascertain other data from a visual scan, such as the health of an individual or location recognition for geodata – and Optical Character Recognition, where algorithms learn to read handwritten text and convert documents into digital versions.

·       3D Environment processing

3D environment processing is an extension of the image processing and tagging skill – most obviously translated into the skills required by the algorithm in a CAV or ‘Connected and Autonomous Vehicle’ to understand its location and driving environment. This uses image data but also potentially radar and laser data to understand 3D geometries. The same technology could also be used in free-roaming robotic devices, including pilotless drones.

·       Text Analysis

These are processes which extract information from, or apply a classification to, items of text-based data. This could include social media postings, tweets or emails. The technology may then be used to provide filtering (for spam); information extraction – for example to pull out particular pieces of data such as names and addresses; or sentiment analysis – to identify the mood of the person writing (as Facebook has recently implemented in relation to postings which are potentially suicidal[2]). Text analysis is also at the heart of chatbot technology, allowing for interaction on social media. A short illustrative sketch of this kind of text classification appears after this list.

·       Speech Analysis

Speech processing takes equivalent
skills to those used for textual documents and applies them to the spoken word.
It is this area which is seeing an incredible level of investment in the
creation of personal digital home assistants from the likes of Amazon (with its
Echo device), Microsoft with Cortana, Google’s Home device and Apple with Siri
(and now the recently launched ‘HomePod’).

·       Data Mining

This is the process of discovering patterns or extrapolating trends from data. Data mining algorithms are used for such things as anomaly detection – identifying, for example, fraudulent entries or transactions; association rules – detecting supermarket purchasing habits by looking at a shopper’s typical shopping basket; and prediction – predicting a variable from a set of others (eg a credit score).

·       Video game virtual environment processing

Video games are a multi-billion-dollar entertainment industry, but they are also key sandboxes for machines to interact with and learn behaviours in relation to other elements in a virtual environment, including interacting with the players themselves.[3]
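To make this a little more concrete for the uninitiated, the short sketch below illustrates the kind of text classification (spam filtering) referred to in the ‘Text Analysis’ category above. It is a minimal, purely illustrative Python example: it assumes the scikit-learn library is available, and the training messages and labels are invented toy data rather than anything taken from a real system.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy training data: a handful of labelled messages.
messages = [
    "win a free prize now",                 # spam
    "claim your cash reward today",         # spam
    "meeting moved to 3pm",                 # legitimate
    "please review the attached contract",  # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Convert each message into word counts, then train a Naive Bayes
# classifier which learns which words are indicative of spam.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
classifier = MultinomialNB().fit(features, labels)

# Classify an unseen message.
print(classifier.predict(vectorizer.transform(["free cash prize"])))
# expected output: ['spam']

The point, for present purposes, is that the classifier’s behaviour is learnt from the data rather than explicitly programmed – which, as we shall see, is precisely what makes the attribution of fault so awkward when things go wrong.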

That’s a quick overview of the practical applications. Let’s take a look at the legal challenges.

The Causation Challenge

So what do I mean by the ‘Causation Challenge’?

I am referring to the way in which traditional liability questions are settled. Basically, this is through the attribution of fault by application of causation principles.

Fault drives compensation. Whether the liability is tortious, contractual or – to a more limited degree – consumer protection based, it is this attribution which enables parties injured financially or physically to seek redress and obtain compensation for the damage suffered. Consumer protection is obviously strict liability by its nature, but even here you need to establish the existence of a defect.

As lawyers, we all understand that fault attribution is driven by the mechanism of causation. If you can pinpoint the cause then you can assign the blame. Whether it is establishing breach of a duty of care in tort, breach of an express or implied term in a contract or establishing a defect in consumer protection liability, in each case the fault or defect must have caused the loss.

The real issue with AI-powered devices is that, as the decisions they take become increasingly removed from any direct programming and are instead based on machine learning principles (as discussed above), it becomes harder to attribute fault.

Our existing liability frameworks deal comfortably with traceable defects – machine decisions that can be traced back to defective programming or incorrect operation. They begin to fail, however, where defects are inexplicable or cannot be traced back to human error.

We’re now seeing
regulators thinking about and grappling with this problem.  

As I suggested in a 2016 ITechLaw conference paper,[4] and as was subsequently advocated by the European Parliament Committee on Legal Affairs in its Report on Civil Law Rules on Robotics in January 2017, one of the ways to ‘fix’ this would be to introduce a strict liability system backed by a licensing fund and a certification agency. This would work by introducing an assessment system for a robotic or AI device which would require the payment of a levy to release that device onto the open market. I like to refer to these certification agencies as ‘Turing Registries’[5] after the great computer pioneer, Alan Turing, although the European Parliament uses the rather more prosaic title of ‘EU Agency for Robotics and Artificial Intelligence’.

The levy would go
into a fund which would enable the payout of compensation in the event a risk
transpired.

This system has some historical precedent, as a variant of it is already in force in New Zealand in the shape of the Accident Compensation Act 1972, which statutorily settles all forms of personal injury accident (including RTAs) and has effectively abolished personal injury litigation in that country. I personally prefer this as a solution as it is essentially scalable – you can imagine that, as machines become more and more capable, they could attract higher licensing charges to fund a greater variety of risks.

What has the UK been
doing to address this challenge? We’ve seen the most movement in the CAV space.

The UK Government recently concluded its consultation on driverless vehicle development – including an assessment of the way in which such autonomous vehicles should be covered by insurance. This was the snappily titled ‘Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies’,[6] which ultimately led to the Vehicle Technology and Aviation Bill, presented by Chris Grayling during the last parliament.

The calling of the 2017 general election meant that this proposed legislative measure automatically failed – however the measure has been substantially resurrected in the Automated and Electric Vehicles Bill, announced in the 2017 Queen’s Speech.[7] At the time of writing, we do not have the text of the new measure, so I refer to the predecessor bill here, as it is clear that the UK government intends to preserve the position adopted in the now defunct Vehicle Technology and Aviation Bill.

So what are the
legislative proposals? Rather than go down the strict liability route I
mentioned earlier, the government has chosen to address the issue of driverless
cars from the perspective of gaps in current insurance coverage caused by fully
autonomous driving.

This is an essentially pragmatic response that will probably work in an environment where there is a mix of driverless cars and human-piloted ones – it also avoids systemic change to the insurance industry. It does however completely sidestep the causation challenge. Crucially, the proposed measure relies very heavily on the ability of insurers to subrogate and therefore bring claims of their own against other third parties, including manufacturers. This will of course be hugely problematic for insurers if the relevant fault or defect cannot easily be traced.[8]

Section 2 of the Vehicle Technology & Aviation Bill as drafted provided that ‘where…an accident is caused by an automated vehicle when driving itself…the vehicle is insured at the time of the accident, and…an insured person or any other person suffers damage as a result of the accident, the insurer is liable for that damage’.

In essence the principle enshrined in the bill was that if you are hit by an insured party’s vehicle that is self-driving at the time, the insurer ‘at fault’ pays out. If you have comprehensive cover then you will also be insured for your own injuries. If the vehicle at fault is not covered by insurance then the Motor Insurers’ Bureau will pay out in the usual way and seek to recover its losses from the owner of the uninsured vehicle. As noted above, it is very likely that this approach will be translated into the new Automated and Electric Vehicles Bill.

That, then, is a quick walk through the current legislative proposals for AI-enabled devices, as represented by the automotive industry. Unsurprisingly, and rather disappointingly, we are looking at a pragmatic stopgap approach which is in effect ‘kicking the can down the road’ rather than addressing the issue. Sooner or later the spectre of causation will need to be confronted.

The Big Data Challenge

The other
challenge facing users and adopters of AI is from within.

The Big Data challenge, as I have called it, has two overlapping facets. The first is the way in which the industry capitalises on the terabytes of data generated or ‘streamed’ by smart devices – and again driverless cars and the transport industry are leading the way on this.

The second is the availability of predictive analytics modelled by AI, which is transforming the way in which businesses serve customers and has the potential to create serious issues around privacy.

Whilst such technologies are being used to lower costs and provide greater competition in a number of industries, insurance among them, there is also a commensurate risk that people become disenfranchised or excluded – in the insurance or finance markets, for example, through the withdrawal of insurance or finance products as a result.

a)     Smart Streaming

‘Smart streaming’ of data has already drawn significant attention from regulators. The European Commission has recently published its Strategy on Co-operative Intelligent Transport Systems or ‘C-ITS’, which sets out its approach to developing a standardised intelligent transport infrastructure allowing vehicles to communicate with each other, with centralised traffic management systems and with other highway users. The potential for such data to be misused is clearly troubling – for example, not only could a CAV identify a journey destination, it could also report back on driving habits and theoretically identify any traffic offences.

In the context of our discussion, such data could obviously also have an impact on the manner in which insurance is offered to the user of the vehicle when it is human-piloted.

The policy adopted
by the EU Commission has been to identify such data as personal data and
therefore afford it the protection of the European data protection framework.

As the Commission states in its report: ‘the protection of personal data and privacy is a determining factor for the successful deployment of co-operative, connected and automated vehicles. Users must have the assurance that personal data are not a commodity, and that they can effectively control how and for what purpose their data are being used.’[9]

b)    Predictive Analytics

In the context of the Big Data
challenge, we should not ignore the disruptive effect of AI-driven predictive
analytics either. The most immediate influence of this is best illustrated by
the systemic impact predictive analytics are having on the Insurance industry.

Quite apart from the potential jobs impact in relation to claims handling and processing (to take one example, Fukoku Mutual Life in Japan is laying off all of its claims handlers in favour of IBM Watson), the technology is transforming the way in which insurance companies model risk and hence price premiums.

At its simplest, insurers model risk by way of a concept known as ‘pooling’. Insurers put large groups of similar people together, using common risk modelling points, and their premiums are used to fund a ‘common pool’.

In any given year, some members of the pool will claim and some will not. As long as the common pool remains liquid, the system continues to work. Predictive analytics function by giving insurers more detail about individuals and their associated risks. This allows for far more accurate risk pricing and removes the need to pool broad groups of people together.
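By way of a simple worked illustration of that shift, the Python sketch below compares a flat pooled premium with individually modelled pricing. The five policyholders and their expected claim costs are entirely invented figures, used only to show the mechanics.

# Hypothetical expected annual claim cost for five policyholders.
expected_claims = {"A": 100, "B": 100, "C": 100, "D": 100, "E": 600}

# Traditional pooling: every member pays the same premium, which funds the common pool.
pooled_premium = sum(expected_claims.values()) / len(expected_claims)
print(f"Pooled premium for every member: {pooled_premium:.0f}")  # 200

# Predictive analytics: each premium is priced against that individual's modelled risk,
# so low-risk members pay less and high-risk members pay more.
for person, modelled_risk in expected_claims.items():
    print(f"Individually priced premium for {person}: {modelled_risk}")

On these invented numbers, four of the five members see their premium halve while the fifth sees it treble – which is precisely the kind of exclusion risk identified earlier.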

Obviously this gives rise to a
whole host of ethical questions about the availability and pricing of
insurance.

Driving behaviour is one obvious example: ‘risky’ behaviours could drive up insurance pricing and ‘safe’ behaviours lower it. Even more serious is the impact of advanced analytics on genetic data, which might model susceptibility to genetic disease and therefore affect the pricing and availability of life insurance coverage. The state of the art is now such that this doesn’t even need genetic material to be sampled.

So, for example, US startup Lapetus can analyse a ‘selfie’ using thousands of different facial data points to determine how quickly an individual is ageing, their gender, their body mass index and whether they smoke. The company claims its system predicts the life expectancy of individuals more accurately than traditional methods.

Postscript – Some thoughts for transactional lawyers

This is all very well, but where
does this leave the transactional lawyer faced with the task of contracting for
an AI-based system?

It is probably the causation challenge that requires more thought in relation to contractual liability, as the challenges posed by the use of big data will be unchanged whether the data are processed by conventional or artificially intelligent systems. Indeed, we are all aware (or really should be) of the onset of the GDPR and the changes that it is likely to bring.

Causation issues remain
problematic. What will need to be analysed in any real-life situational context
is the propensity for an AI system to make decisions which will have liability
impacts. The need here will be for both parties to avoid the ‘rabbit hole’ of
claims for damages at large, which of course largely depend on causation and
proof of loss.

I would suggest that in ‘mission critical’ applications, where an unpredicted decision is made by an artificially intelligent machine, the need will be to sidestep causation and focus on the loss itself. This will inevitably draw us down the path of indemnity-based recovery mechanisms.

My prediction – expect to see many more of these in your contracts in the future, expressed on a much wider basis.

John Buyers is a partner at
Osborne Clarke LLP and leads the UK Commercial team.

© Osborne Clarke LLP, 2017



[2]
See for example Facebook artificial intelligence spots
suicidal users
, BBC News, 1 March 2017

[3] See for example the July edition (E308) knowledge feature of Edge Magazine – ‘Machine Language’ – which discusses new startup SpiritAI, a business that has developed an intelligent character engine for NPCs (non-player characters) in video games, thus obviating the need for thousands of pages of pre-scripted dialogue.

[4] ‘Liability issues in Autonomous and Semi-Autonomous Systems’, John Buyers, available online at Osborne Clarke’s website (and for ITechLaw members on the ITechLaw website). See http://bit.ly/2tQk78i

[5]
The term ‘Turing Registry’ was coined
by Curtis E.A. Karnow in his seminal paper, ‘Liability for Distributed Artificial Intelligences’, see Berkeley
Technology Law Journal 1996, Vol 11.1, page 147

[6]
See the government response to the consultation at http://bit.ly/2iLd23x

[8] What seems to have been overlooked in the government’s analysis is the complete unsuitability of current consumer protection law (as embodied in the Consumer Protection Act 1987) to deal with liability issues caused by AI devices. The first concern is that the law is designed as a measure to protect consumers (ie real live persons, as opposed to legal persons) from personal injury; its focus is not on pecuniary loss and damage. Secondly, the definition of ‘product’ under the CPA is not clear on whether software and/or other products of an intellectual type are included, and thirdly there is the so-called ‘development risks defence’ which provides a defence to the manufacturer ‘if the scientific and technical knowledge at the time the product was manufactured was not such that a producer of a similar product might have been able to discover the defect’ (s 4(1)(e)) – clearly a defence which will provide maximum wriggle room for the developers of AI technology! See my 2016 paper (referenced earlier) for a more detailed discussion.

[9]
Para 3.3 of the Report (Privacy and data protection safeguards), p 8