Leah Grolman reviews the first edition of this book, which brings together disparate areas of law as they apply to AI.
The Law of Artificial Intelligence is a valuable resource. The text is currently the fastest way to identify and explore the legal and ethical issues for those who want to develop or start using AI in their business, those who have been aggrieved in some way by AI technology or indeed those who are defending a claim by someone who has been so aggrieved. The authors explain the law relevant to each chapter – and give a detailed but clear explanation of the most common forms of AI today – so the book is as useful for those with subject matter expertise as for novices, and for those pursuing a particular question as for those wanting a holistic understanding of various areas of law and regulation as they apply to AI.
Throughout the book, the authors come back to what creators and consumers of AI should be thinking about in practice. There is a section on what parties should anticipate when negotiating contracts for the supply of AI, which includes suggestions about how a table might be used to set out the agreed division of responsibility. There is a list of the multitude of regulators in the UK of which creators and consumers of AI should be aware. There is also a list of key factual matters that are likely to bear on whether an AI service provider has met a contractual duty to exercise reasonable care and skill. This is just to scratch the surface of the practical guidance the authors offer in this text.
It is important to clarify that this is not a book about “the law of artificial intelligence”. Nor does it profess to be, other than by its title, which was probably chosen for its elegance and in anticipation of there one day being a ‘law of AI’. As Patricia Shaw notes in her chapter on “The law, ethics and AI”, “At the point of writing this text, there is no specific AI regulation in the UK … There is no ‘law of AI’ as AI cannot be treated as a sector in and of itself.” (at [3-002]). The book is about how various areas of law (as they currently stand) apply to AI (as it currently stands).
Being focused on what the law is rather than what the law ought to be, the book is most useful to lawyers in private practice, at the Bar, in the judiciary or in-house at businesses that create and/or consume AI. I should add that the book also covers the use of AI in legal services (e.g. disclosure in litigation, project management) and in the justice system more broadly, which will be of particular interest to this audience. That said, I would recommend the text to scholars, regulators and legislators, too: to argue about what the law should be without properly understanding how existing laws apply is to put the cart before the horse. While the book is primarily about law and regulation in the UK, it contains a helpful chapter that surveys how the EU, France, the US, Japan, China and Singapore are regulating AI, as well as the main global initiatives to define principles that should govern AI.
Finally, not every law text is a pleasure to read. This one is. Readers should keep their eyes peeled for a footnote explaining that part of the text has been “drafted” by OpenAI’s GPT-3 machine learning system, and for a quote from Monty Python and the Holy Grail.
About the author
Leah Grolman is an associate at CMS Cameron McKenna Nabarro Olswang LLP and a member of the Society of Computers and Law. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official views and opinions of CMS.