Darren Grayson Chng on a book that looks at the risk to governance posed by AI - and what can be done.
‘We, the Robots’ is a book on the difficulties that (narrow) AI systems pose to government and governance. It examines how existing legal tools can be adapted to the modern environment, and what more is needed.
The book is organised into three parts. Part I suggests that AI systems pose three challenges to regulation. The first is high-speed computing: citing the 2010 “Flash Crash” as an example, the book argues that the processing speed of AI systems may render some kinds of harm uncontainable, unstoppable, or undetectable.
The second challenge is the increasing autonomy of AI systems. Interestingly, the book takes the view that the standard against which autonomous vehicles will be measured could depend on whether human drivers or AVs (possibly level three, probably levels four and five) predominate on the roads.
The third challenge is the growing opacity of AI systems: it is becoming increasingly difficult to understand or explain some machine learning techniques, or the basis upon which they produce an output.
Part II of the book discusses the tools available to deal with these three challenges, and their limitations. Among other things, it considers the applicability of the tort of negligence, strict liability, and product liability laws, and the role of insurance in managing risk, shaping behaviour, and deterring conduct.
“What more is needed” is discussed in Part III of the book. It talks about why, how, and when to regulate, and the institutional possibilities for regulation. The very last chapter considers whether and how AI systems themselves can support the regulation of AI. For example, regulatory objectives can be built into software (regulation by design), and AI systems can be built to report on their own compliance with rules and policies, including self-examination for bias.
I would recommend this book without hesitation to policymakers dealing with AI. The book is wonderfully rich in content. The author weaves together issues of law, ethical principles, philosophy, and pieces of history to present a coherent and intriguing point of view. I also liked that the book included China’s approach to AI regulation in its discussion. Politics aside, the Chinese government has been drafting and issuing laws at great speed – faster than other countries – to regulate the development and use of emerging technologies. It is a pity that China’s approach is not commonly discussed in papers on AI regulation, because there may be lessons to learn from it and from the market responses to their regulation.
One item on my wishlist, should there ever be a second edition of this book, is more pages on the models used around the world to regulate AI. It would be useful to analyse what works and what does not: whether a jurisdiction has comprehensive laws, laws that address only particular industry sectors, or a co-regulatory or self-regulatory model, and what the global trend is.
Darren Grayson Chng
ABOUT THE BOOK
by Simon Chesterman