AI Challenges and the Law: Being smart enough to boss around smart devices and AI

September 17, 2018

Artificial intelligence has quietly invaded
our workplaces, homes and lives.

In many cases, we willingly invite the
agent into our homes and our lives. The agent can often now listen, see, feel
and report. Sometimes it can actuate other devices or services, either
autonomously or semi-autonomously. The advance guard that we invite into our
homes consists of smart TVs and smart speakers. Smart speakers such as Amazon Echo,
Google Home and Apple’s HomePod are now in more than 50 million US households. The
US market for smart speakers is estimated to be growing at around 50% per
annum. Many homes already have smart control and connectivity functionality. Smart
homes and smart offices will rapidly become common. Already video-surveillance of
offices and semi-public and public places, and surveillance by employers of
employee use of employer-funded or employer-supported internet access devices
and cloud services (including cloud services also used at home by an employee
for domestic purposes), are common.

Settings, Safeguards and Awareness

Rapid uptake of such new technologies
largely reflects the many benefits that smart devices bring to our lives. And
rapid uptake is not of itself a problem, if user awareness of the capabilities
and limitations of such devices keeps pace with deployment of devices and
changes in features and functionality of those devices. But the deployment and
use of new capabilities is increasingly opaque. One important change is that
the capabilities are often not controlled by the affected individual – or as
the GDPR more clearly expresses it, ‘the data subject’. Our current thinking as
to appropriate privacy settings and safeguards has focussed upon user-controlled
personal access devices such as smartphones and user-initiated activity in use
of social networking services, internet search and ‘acceptance’ of cookies, online
tracking identifiers and behavioural advertising. Our regulatory response
has been to require higher transparency of what providers of services are
doing, to require more convenient privacy settings, to cajole online service
providers to improve consumer trust and to name and shame (and sometimes fine) service
providers that significantly transgress.

This approach may be sufficient in an
adult, allegedly user-controlled, world that is based upon a transparency and
contract view of privacy. Many citizens apparently take the view that you have only
yourself to blame if you don’t bother to find, read and apply the privacy settings
that are made available to you. But this is not a reasonable view of
our new ‘smart’ world.

Our smart world will soon be dominated by devices
where settings are determined by others, where settings are not as readily seen
or understood, where vulnerabilities in data security are common, and where
some users don’t know or forget that the device is there and in use, while
other users become overly dependent upon those devices operating reliably in
conditions for which the device is not designed, or conditions in which the
device is simply not consistently reliable. As these devices are increasingly given
important responsibilities for control and actuation – to autonomously cause
other devices to be activated or deactivated in certain conditions – the risks
of over-reliance, of being lulled into complacency about shortcomings or
limitations, or of devices being commanded by malicious actors quietly increase.

Because we don’t see these semi- or fully
autonomous agents as robots-that-look-like-robots, we don’t interrogate these
capabilities or apply the same level of scrutiny or oversight as we are now
doing for more distant AI applications such as robots-that-look-like-robots or
self-driving cars. So we treat these risks as future risks – to be addressed with
some haste as we hurtle towards a more manifestly AI world, but future risks
nonetheless. This
is a mistake.

Jekyll and Hyde

We don’t need to stoke fears of the
unfamiliar: there are plenty of ill-informed commentators doing that already. But
we do need to ensure that we don’t become too familiar with the phalanx of AI
invaders before we size them up. We are entitled to enquire of their makers,
trainers and commanders as to why we should consider that their troops will reliably
behave like responsible guests should we elect to invite them into our homes
and workplaces. Our expectations of the behaviour of these guests are then
informed by that conversation. We might then reasonably expect that our invitee
Dr Henry Jekyll should not transmogrify into evil Edward Hyde through remote
software upgrade or changes to service features and interconnectivity. Our family,
housemates, other invitees or tenants should be able to reasonably expect that
if they now cohabitate with Mr Hyde, they have been informed both that Mr Hyde is
present and that he is Mr Hyde, not Dr Jekyll. If a service provider by remote
software upgrade or change to service features or connectivity could
transmogrify Dr Jekyll into Mr Hyde, we should of course know this – regardless of whether
there is technically any handling of personal information involved. And if there
is a substantial risk that Dr Jekyll could become Mr Hyde if we fallible
consumers are careless but not manifestly stupid and allow Dr Jekyll to
communicate with some nasty gang of irresponsible service providers, a service
provider should tell us that. We can’t expect a service provider to address
all of our manifest shortcomings and stupidities: the economy would stop if
this was the law. However, it is not good enough to say that we are on our own
and that we need to be especially intelligent in order to understand and
control our smart invitees.

Allocating Responsibility

So how do we translate such homespun
thinking into a concrete path for development of data and consumer law and
regulation? We need to consider the appropriate, ethical and socially
responsible limits to creation and deployment of autonomous capabilities in
smart devices, and how to give effect to these limits in law, including limitations
on the freedom to contract out of them. In particular, we need to consider whether
it is reasonable and fair for a supplier to shift responsibility to a buyer to
determine whether and when to inform others about deployment and the use of a
device and of the device’s capabilities and limitations. We now expect that
user privacy and security should be by design and default. However, we don’t
yet expect that a product or service supplier should build accountability by
design into their offering or should ensure that there is adequate transparency
about who is told and who will know what a device is doing and how to control
what the device does.

Nor does our law yet expect that there will
be reasonable clarity as to who is responsible for what.

And the supplier often faces a dilemma: in a
post-iPhone world consumers expect sleek and simple user interfaces and
single-page, graphics-driven deployment instructions. The required booklet of mandatory
electrical warnings and warranty limitations is often consigned straight to the
paper bin. More comprehensive disclosures and instructions might well suffer a
similar fate. But we do need to create a culture of better disclosure by
suppliers, including as to their expectations of the level of responsibility to
be exercised by consumers in relation to deployment and use of these devices. In
short, we should apply to providers of smart devices and smart services the same
expectations that, post Cambridge Analytica and GDPR, we are now seeing develop
about service providers informing and empowering users of online search and
social networking services.

Increasing Complexity

It will quickly become more complex. Autonomous
robots that can harm humans rightly inspire fear and calls for new regulation. A
driverless vehicle is not much different to the robots envisaged for regulation
by Isaac Asimov’s laws of robotics, but with the added complexity that at some
time robot cars will face the trolley-car dilemma. When every available choice
will cause harm to humans, how do you assess the magnitude of harm of each choice
so as to program the robot to choose?

Semi-autonomous agents also raise issues of
complicity and moral culpability. It was recently reported that Google
executives announced to company staff that Google won’t renew its contract to
work on Project Maven, the controversial Pentagon program designed to provide
the military with artificial intelligence technology used to help drone
operators identify images on the battlefield. In one sense, even this ethical
question is easy. Designing a weapon may be less morally culpable than
operating a drone to fulfil its killing mission: the designer might reasonably expect
that a drone that could be weaponised will only be armed and deployed to take
out a properly assessed and appropriate target in a ‘just war’. But at what
point should a designer of outputs intended to be instruments of war consider
that the risk of morally reprehensible uses outweighs the benefits of using
those outputs on morally just missions? And who on the design team can be
expected to make these challenging assessments?

Immediate, More Mundane Ethical Challenges

These questions are rightly attracting much
attention from ethicists and lawyers. But as we have already noted, simpler
forms of AI already in use create more immediate issues. And most AI in
deployment today is used to aid humans to automate mundane or routine tasks and
decisions, to identify anomalies or unusual cases that require active human
review, and to present filtered information that aids a human decision. At
first glance, this does not look ethically or legally challenging. A human
still calls the shots, and the AI does the easy (computationally driven
inferential) stuff and gives the hard (subjective) stuff to the human. And the
decision maker in the business decides whether to trust the AI to make the
decision, or to call the question out for a human decision. So what’s the
problem?
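
Before answering, it may help to make that division of labour concrete. Below is a
minimal Python sketch of the common ‘human in the loop’ pattern described above, in
which the model disposes of routine cases and escalates borderline or anomalous ones
for human review. The case identifiers, the confidence threshold and the outcome
labels are invented purely for illustration, not drawn from any particular product.

# Illustrative only: the model handles routine cases and escalates
# uncertain or anomalous ones to a human decision-maker.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str       # "auto-approved", "auto-declined" or "escalated"
    confidence: float

def triage(case_id: str, model_score: float, threshold: float = 0.9) -> Decision:
    """model_score is a hypothetical probability that the case is routine and approvable."""
    if model_score >= threshold:
        return Decision(case_id, "auto-approved", model_score)
    if model_score <= 1 - threshold:
        return Decision(case_id, "auto-declined", model_score)
    # Borderline cases are filtered out and presented to a human reviewer
    return Decision(case_id, "escalated", model_score)

if __name__ == "__main__":
    for cid, score in [("A-101", 0.97), ("A-102", 0.55), ("A-103", 0.04)]:
        print(triage(cid, score))

Whether the business sets that threshold thoughtfully, and whether anyone reviews how
it performs over time, is where the real questions begin.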

Misplaced Faith

The first problem is that the decision-maker’s
faith in the algorithms driving the AI may well be misplaced.

Humans are fallible and biased, but managers
and other decision makers are improving their understanding of likely human
misperceptions and bad heuristics. The pioneering work of Daniel Kahneman and
Amos Tversky in behavioural psychology over 20 years ago has now permeated many
disciplines, including management theory. We now have a reasonable handle on
how good humans make bad decisions. However, we have only just begun to build
managers’ understanding of how to manage AI. Many
businesses and government agencies are not yet familiar with evaluation of data
analytics products, or with managing data scientists. Given the continuing
acute shortage of experienced data scientists, this skills deficit is likely to
remain a problem for years to come.

As with many shiny new products oversold by
vendors to excited buyers, AI is often bought by people who are not well qualified
to assess the shortcomings of an AI solution. Many boards of directors and CEOs are rushing
their businesses into AI without properly understanding its current
limitations. Over-reliance upon early AI is a likely outcome.

Opacity

Another problem is opacity of many AI
applications. Unless transparency is engineered into machine learning, the
algorithms may not be properly understood by decision-makers, or may be incapable
of being cross-examined when things go wrong. The algorithms driving the AI may be
biased or wrong. The data used to train and test the algorithms may be too
narrow or not deep enough, so the algorithm is great with decisions at the
centre of the bell curve but unreliable over the broader range of data sets
presented for decisions. The environment in which the AI is used may be quite
different from the anticipated use environment for which it was developed. The algorithm may
entrench historical outcomes, rather than facilitate better outcomes. 
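
A minimal sketch in Python (using scikit-learn; the data, the non-linear relationship
and the numbers are invented purely for illustration) shows how a model fitted on a
narrow slice of historical cases can look dependable on familiar cases yet
extrapolate badly across the broader range it meets in deployment.

# Illustrative only: a model fitted on a narrow range of inputs
# performs well in-distribution but extrapolates badly.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def true_outcome(x):
    # A hypothetical non-linear relationship the simple model does not capture
    return 0.5 * x ** 2 + x

# Historical cases cluster near the centre of the distribution
x_train = rng.uniform(-1.0, 1.0, size=(500, 1))
y_train = true_outcome(x_train[:, 0]) + rng.normal(0, 0.05, 500)
model = LinearRegression().fit(x_train, y_train)

# Cases like those seen in development, versus the broader range met in deployment
x_familiar = rng.uniform(-1.0, 1.0, size=(200, 1))
x_broad = rng.uniform(-4.0, 4.0, size=(200, 1))

for name, x in [("familiar cases", x_familiar), ("broader deployment", x_broad)]:
    err = mean_absolute_error(true_outcome(x[:, 0]), model.predict(x))
    print(f"mean absolute error on {name}: {err:.2f}")

Run as written, the error on the broader range comes out many times larger than on
the familiar cases: the model has not changed, it is simply being asked questions
its data never covered.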

Many evaluations and deployments of AI do
not ask the appropriate questions. The AI may have been properly specified by
the supplier, but then let loose for use in a way that is inappropriate for the
particular application. And today many applications of AI escape careful review
as to fairness of outcomes, because ‘fairness review’ is not required as a
matter of standard business practice.

Contrast the GDPR requirement for automated
decision-making: for all such decision-making, except that expressly based on a
law, the data subject must be informed of the logic involved in the decision-making
process, their right to human intervention, the potential consequences of the
processing and their right to contest the decision reached. Yes, this
requirement will burden EU businesses and government agencies, and yes, the
drafting is lousy and legal uncertainty will lead to lots of issues, but its
prospective operation will also control bad actors and help nurture trust of
citizens and consumers, thereby increasing social licence for good applications
of AI.

Legal Catch-up?

The law and lawyers are struggling to catch
up. Among many legal issues raised by AI deployments, two fundamental issues
are not yet well understood.  

First, product liability laws in many
jurisdictions impose responsibilities on both suppliers and business users of
AI products. A provider of services to consumers is liable for services
provided without due care and skill and for services made available for a
reasonably expected purpose where those services are not fit for that purpose. A
provider of products is also responsible for products which have a safety
defect. Unless the underlying reasoning of the AI is sufficiently transparent
and capable of being proven in court, a defendant AI user may have liability
exposure to a consumer plaintiff that cannot be sheeted home to an upstream
supplier of faulty AI.

Second, relevant tort law, and many
statutes, are not well equipped to deal with counter-factuals. The relevant
legal question is usually not whether an AI application performs statistically
better than humans. Rather, the question is whether, for the particular AI
decision and the particular circumstances that a plaintiff has put before the court,
the AI user acted reasonably in relying upon the AI. Sometimes that may lead to a
counter-factual analysis of whether a human would have done better, but in many
cases we can’t be sure that this approach will be accepted in the courts.

Conclusion

AI is unstoppable. Law and ethics will need
to adapt to accommodate good AI. We may expect plenty of issues arising from
bad AI decisions unless businesses and government agencies move ahead of the
law to carefully evaluate AI before applying it – and they must then also
ensure that AI is used fairly and responsibly.

Peter Leonard of Data Synergies is a
Sydney-based lawyer and business consultant to data-driven businesses and
government agencies. Peter chairs the Australian Computer Society’s AI Ethics
Technical Committee and the Law Society of New South Wales Privacy and Data Law
Committee.