AI: Making the UK Ready, Willing and Able

May 14, 2018

Barely a day goes by without a piece in the media
on a new aspect of AI or robotics, including in a recent issue of Gulf Today. Some are pessimistic, others optimistic.

Elon Musk, the Tesla and SpaceX boss, has
called AI more dangerous than nuclear weapons. The late Professor Stephen
Hawking put the future rather dramatically: ‘the development of full
artificial intelligence could spell the end of the human race’ and again
‘the rise of powerful AI could either be the best or the worst thing ever to
happen to humanity’.

Others, such as Dr Nathan Myhrvold, the former
CTO of Microsoft, have a more optimistic view about the future: the market will
solve everything.

The CEO of Google, Sundar Pichai, says AI is
more profound than electricity or fire.

This is the context for the Report of our
House of Lords AI Select Committee, AI
in the UK: ready, willing and able?
, which came after nine months of
inquiry, consideration of hundreds of written submissions of evidence, hours of
fascinating oral testimony, one session being trained to build our own neural
networks and a fair few lively meetings deciding amongst ourselves what to make
of it all.

In our conclusions we are certainly not of the
school of Elon Musk. On the other hand we are not blind optimists. We are fully
aware of the risks that the widespread use of AI could raise, but our evidence
led us to believe that these risks are avoidable, or can be mitigated to reduce
their impact.

But we need to recognise that understanding
the implications of AI here and now is important: Amazon’s Echo and Echo Dot,
Google Home and a variety of other devices, Siri on Apple devices for example,
are already in one in ten homes in the USA and UK. As a result of the Cambridge
Analytica saga, consumers and citizens are far more conscious of the uses to
which their data is put, both through AI and otherwise, than they were just a
few months ago.

The Committee’s task was ‘to consider the
economic, ethical and social implications of advances in artificial
intelligence’. From the outset of the inquiry, we asked ourselves, and our
witnesses, five key questions:

  • How does AI affect people in their
    everyday lives, and how is this likely to change?
  • What are the potential
    opportunities presented by artificial intelligence for the United Kingdom? How
    can these be realised?
  • What are the possible risks and
    implications of artificial intelligence? How can these be avoided?
  • How should the public be engaged
    with in a responsible manner about AI?
  • What are the ethical issues
    presented by the development and use of artificial intelligence?

As the Report is 181 pages long with 74 recommendations,
you will be pleased to know that I will not be going into its detail here. The
Report is intended to be practical and to build upon much of the excellent work
already being done in the UK.

Our recommendations revolve around five
central threads which run through the report.

The first is that the UK is an excellent place
to develop AI, and people here are willing to use the technology in their
businesses and personal lives. The question we asked was: how do we ensure that
we stay as one of the best places in the world to develop and use AI?

There is no silver bullet. But we have
identified a range of sensible steps that will keep the UK on the front foot.

These include making data more accessible to
smaller businesses, and asking the Government to establish a growth fund for
SMEs to scale up their businesses domestically and not worry about having to
find investment from overseas or prematurely sell to a tech major. The
Government needs to draw up a national policy framework, in lockstep with the
Industrial Strategy, to ensure the coordination and successful delivery of AI
policy in the UK. The Government's recent AI Sector Deal is a good start, but
only a start. Real ambition is needed.

A second thread relates to diversity and inclusion:

  • in education and skills
  • in digital understanding
  • in job opportunities
  • in design of AI and algorithms
  • in the datasets used.

In particular, the prejudices of the past must
not be unwittingly built into automated systems. We say that the Government
should incentivise the development of new approaches to the auditing of
datasets used in AI and also encourage greater diversity in the training and
recruitment of AI specialists.

A third thread relates to equipping people for
the future. Many jobs will be enhanced by AI, many will disappear and many new,
as yet unknown, jobs will be created. Significant Government investment in
skills and training will be necessary to mitigate the negative effects of AI. Retraining
will become a lifelong necessity. At earlier stages of education, children need
to be adequately prepared for working with, and using, AI; an understanding of
data is crucial.

A fourth thread is that individuals need to be
able to have greater personal control over their data, and the way in which it
is used. We need to get the balance right between maximising the insights which
data can provide to improve services and ensuring that privacy is protected.

The ways in which data is gathered and
accessed need to change so that everyone can have fair and reasonable access to
data, while citizens and consumers can protect their privacy and personal data.

This means using established concepts, such as
open data, ethics advisory boards and data protection legislation, and
developing new frameworks and mechanisms, such as data portability hubs and
data trusts.

AI has the potential to be truly disruptive to
business and to the delivery of public services. For example, AI could
completely transform our healthcare both administratively and clinically if NHS
data is labelled, harnessed and curated in the right way. But it must be done
in a way which builds public confidence. As these new frameworks and mechanisms
become increasingly important, transparency in AI is needed. We recommended
that industry, through the new AI Council, should establish a voluntary
mechanism to inform consumers when AI is being used to make significant or
sensitive decisions.

Of particular importance to the Committee was
the need to avoid data monopolies, particularly those of the tech majors.
Access to large quantities of data is one of the factors fuelling the current AI
boom. We have heard considerable evidence that the ways in which data is
gathered and accessed needs to change, so that innovative companies, big and
small, as well as academia, have fair and reasonable access to data.

Large companies which have control over vast
quantities of data must be prevented from becoming overly powerful within this
landscape. In our report we call on the Government, with the Competition and
Markets Authority, to review proactively the use and potential monopolisation
of data by big technology companies operating in the UK. It is vital that SMEs
have access to datasets so they are free to develop AI.

The fifth and unifying thread is that an
ethical approach is fundamental to making the development and use of AI a
success for the UK. The UK contains leading AI companies, a dynamic academic
research culture, and a vigorous start-up ecosystem as well as a host of legal,
ethical, financial and linguistic strengths. We should make the most of this
opportunity.

A great deal of lip-service is being paid to
the ethical development of AI, but the time has come for action. We have
suggested five principles that could
form the basis of a cross-sector AI Code.

  • Artificial intelligence should be
    developed for the common good and benefit of humanity.
  • Artificial intelligence should
    operate on principles of intelligibility and fairness.
  • Artificial intelligence should not
    be used to diminish the data rights or privacy of individuals, families or
    communities.
  • All citizens should have the right
    to be educated to enable them to flourish mentally, emotionally and
    economically alongside artificial intelligence.
  • The autonomous power to hurt,
    destroy or deceive human beings should never be vested in artificial
    intelligence.

These suggestions are just to get the ball rolling.
Ethical AI principles are for discussion not just amongst academics,
businesses and governments. They must be agreed and shared widely, and must
work for everyone. Without this, an agreed ethical approach will never get off
the ground.

We did not suggest any new regulatory body for
AI, taking the view that ensuring ethical behaviour should be the role of
existing regulators, whether the FCA, CMA, ICO or OFCOM. We also
believe that in the private sector there is a strong potential role for ethics
advisory boards.

AI is not without its risks and the adoption
of the principles proposed by the Committee will help to mitigate these. An
ethical approach will ensure the public trusts this technology and sees the
benefits of using it. It will also prepare them to challenge its misuse.

All this adds up to a package which we believe
will ensure that the UK remains competitive in this space.

AI policy is in its infancy in the UK. The
Government has made a good start in policy making and our report is intended to
be collaborative in its spirit and help develop that policy to ensure it is
comprehensive and coordinated.

In our Report we asked whether the UK is
ready, willing and able to take advantage of AI. If our recommendations are
implemented, it will be.

The omens from the Government are good. What
we need now is to make sure that our recommendations are adopted.
Where you agree with them, we welcome support in taking them forwards with
industry, academia and the Government. For AI to continue to be a success, we
need to work together.

Lord Tim
is a Partner and Head of UK Government Affairs. He was made a
life peer in 1998. He is the Liberal Democrat spokesman for the Digital Economy
and a former spokesman on the Creative Industries and is Chair of the House of
Lords Select Committee on Artificial Intelligence (2017-). He is Co-Chairman of
the All-Party Parliamentary Group on Artificial Intelligence.