Christopher Millard ponders the wide-ranging ideas about AI put forward by Professor Richard Susskind in his latest work.
Who is Richard Susskind and why is he writing about AI?
For once, it is probably true that an author really does not need any introduction, at least to anyone reading this review who is a member of the Society for Computers and Law (SCL). For the uninitiated, however, Professor Richard Susskind is President of SCL and Special Envoy for Justice and AI to the Secretary-General of the Commonwealth, and for 25 years he was Technology Adviser to the Lord Chief Justice of England and Wales.
When I first met Richard, he was in search of legal advice. His doctoral supervisor at Oxford, the brilliant computer lawyer and legal informatics pioneer Colin Tapper, had pointed out that someone might input a real-world fact pattern into a complex decision tree system that Richard had developed with the Oxford academic Philip Capper. In reliance on the output, an expensive mistake might be made. This was not too far-fetched a scenario given that the software was included on a couple of floppy disks with each copy sold of their book: Latent Damage Law: The Expert System. The year was 1988, Richard had already been working for seven years in the field that we now call ‘AI’, and this was the world’s first fully operational rules-based expert system for lawyers. It was also the era when what we now call ‘AI law’ began to emerge and those of us with a computer law (rather than legal tech) focus were trying to figure out who would be deemed to be the author of a ‘computer-generated’ literary, dramatic, musical or artistic work under the UK Copyright, Designs and Patents Act 1988.
Much water has passed under many bridges since then. Along the way, Richard has published a shelf of other books about the impact of technology on the law, and he has also become something of a guru well beyond the confines of the legal domain. The Future of the Professions, co-authored with Daniel Susskind, caused a stir with provocations in the spirit of: “Hey lawyers / actuaries / architects / brain surgeons / [insert your profession here], what’s so special about those, frankly often mundane, tasks which are the bedrock of your prestige and income? You may not realise it, but clients / patients / etc aren’t interested in how you do things in terms of process, they just want good outcomes.” The implicit warning was: “You can stay in denial if you want, but the robots are coming anyway.”
Fast forward to 2025 and Richard has embraced the challenge of tackling a fundamental question of our time: how should we think about AI? This, we are told, is a big deal. Indeed, Richard argues that “balancing the benefits and threats of artificial intelligence – saving humanity with and from AI – is the defining challenge of our age.”
He is by no means the first to grapple with this issue. In recent years, there has been a virtual tsunami of articles, books, policy papers, international declarations, legislative initiatives, and more, about AI and its potential impact on life, the universe, and everything. Calls abound to “do something” to address a perceived proliferation of AI risks. Such risks fall on a remarkably broad spectrum, ranging from the mildly irritating right up to the full-blown existential. So far, most AI policy and governance initiatives have become stuck at the level of abstract principles, while attempts to devise comprehensive regulatory frameworks containing granular rules (such as the EU’s AI Act) may founder under the weight of their own complexity.
All of this makes this new book on how to think about AI rather timely.
What’s in the book?
The book is in five parts. Part one, ‘Understanding AI’, is a primer and includes a discussion of what AI is, and is not. Part two invites us to ‘think differently’ about AI, particularly in terms of distinguishing processes (how AI works) from outcomes (what AI can do). Part three looks at potential impacts of AI on organisations and society, and how these might vary depending on the assumptions we make about the status quo versus the possibility of disruption. Part four is about confronting the risks associated with AI, with suggestions for how to classify such risks and recommendations for what should be done in response. Part five takes us on a journey, in fact multiple potential journeys, into the future. Although the author would no doubt prefer that we all read the book from start to finish, each part works well enough on its own. I anticipate that at least a few readers will be selective in what they read or at least will change the sequence of the parts based on their interests and pre-existing knowledge. I did read the chapters sequentially, but I enjoyed parts four and five the most.
Over the years, Richard has developed a habit of including in each new book a recap of the main themes of his previous work to date. In some cases, the review is in a standalone form, as for example in the introduction to The End of Lawyers? (2008). In ‘How to Think about AI’, the development of Richard’s ideas is more integrated, with such framing especially prominent in part two, where process-thinking is distinguished from outcome-thinking, and in part three, where automation, innovation, and elimination are contrasted. Some might regard such restatements of previous arguments as somewhat derivative, or even self-indulgent. Personally, I expect they will be helpful to most readers, whether as a refresher for people who know some of the back story, or as a primer for those for whom this is all new. The core concepts have stood the test of time, and it is interesting to see how they can help us to think more clearly about AI.
Understanding AI and Thinking Differently
From the outset, the author makes it clear that his approach to AI is “pragmatic rather than theoretical”. If you are looking for a technical deep dive into how AI systems work under the hood, you should look elsewhere. On the other hand, if you are keen to get a flavour of what AI can do now, what it might do in the future, and how that might affect us all, then you are in the right place.
To set the scene for all that follows, chapter 1 includes ‘A Very Short History of AI’. This is a fast-paced and lively survey that takes us all the way from Classical antiquity (the female robots in Homer’s Iliad) right up to ChatGPT. Chapter 2, ‘On Technology’, posits that digital technology is characterised by systems and machines that are becoming ‘increasingly capable’, enabled by technologies that are ‘advancing exponentially’, with ‘no apparent finishing line’. We are urged to bear in mind that these are very early days for AI, with dramatic developments likely to be facilitated by ‘not yet invented’ technologies. Meanwhile, even though tools like ChatGPT represent nothing more than ‘faltering first infant steps’, we already find ourselves struggling to keep up with how such systems work. We should assume that this ‘state of incomprehension’ will only grow.
Given the rapid pace of development, Susskind suggests that we should not dismiss the currently available AI tools because they are imperfect. I’ve noticed that when people learn that I am working on AI law and governance, a common first reaction is that they’ve ‘tried AI’ (typically a free version of ChatGPT or some other Generative AI tool) and they were unimpressed. Why, they ask, should anyone be worried about, or indeed take seriously at all, a system that ‘hallucinates’? I try to explain that what they have tried is not the current state of the art and that, more importantly, even the most powerful systems available today do not represent the end game. I sometimes also can’t resist the temptation to point out that many humans, including some in positions of high office, make stuff up all the time. We might instead ask how we will know in the future whether an AI has deliberately degraded the quality of its outputs just to fool us into thinking that it is a human rather than a much more articulate and precise machine.
Part two (chapters three to five) urges us to ‘think differently’. We should be careful to distinguish ‘process-thinking’, which focusses on how AI systems and indeed human brains work, from ‘outcome-thinking’, which asks what AI systems will be able to do, and what impacts they might have. One of the key reasons why this matters is that it provides an antidote to the ‘AI Fallacy’. Many people, especially professionals, argue that an AI can’t replace them because it can’t replicate or mimic the way they reason and work. That is to miss the basic point that comparable, or better, outcomes might be delivered in radically different ways. To his credit, Richard acknowledges that his own thinking has been through a paradigm shift. For most of the 1980s he was “a fully paid-up member of the process-thinker camp”, labouring to distil complex legal rules and human expertise into massively complex decision-trees. Having realised that this approach was not scalable, he has become an enthusiastic, and pragmatic, advocate of the outcome-focussed approach.
Making AI Work
In part three of the book (chapters six and seven), we are reminded that ‘automation’ typically involves little more than taking existing tasks carried out by humans and applying a layer of technology to try to boost productivity. Significant though the impact of such automation may be, it is ultimately backward-looking and may entrench significant limitations of traditional ways of working. More radical is ‘innovation’, which is about using technologies to achieve outcomes that were not possible before, and which in some cases might not even have been imaginable. As a result of innovation, particular tasks may no longer need to be undertaken, with or without automation. More radical still is the possibility of ‘elimination’, whereby technologies, including AI systems, might be deployed to make significant problems disappear completely, with the result that automation and innovation both fall away once the underlying problems are no longer there to be solved.
To provide practical examples, the author revisits some of his earlier work on dispute resolution. Legal research applications, litigation support systems, and the streamlining of court processes are all exemplars of automation, intended (merely) to make existing processes more effective and efficient. Contrast those developments with an innovation like online dispute resolution, whereby participants may not need to assemble physically at the same time in a particular courtroom, and ‘court’ may evolve from being a place to a service which can be delivered remotely and asynchronously. More radical still would be the use of technology to facilitate ‘dispute avoidance’ which might result in complete elimination of various legal issues which courts currently exist to adjudicate.
What about all those AI risks?
In part four, a structure is proposed first for sorting the many perceived AI risks into categories (chapter 8), then suggestions are made for ‘harnessing’ AI (chapter 9). Susskind identifies seven basic categories. Category 1 comprises existential risks which threaten the survival of the human race or of civilisation. These might involve weaponisation of AI by bad actors; a devastating outcome arising accidentally; or an autonomous AI itself engaging in destructive activity. Category 2 is risks of catastrophe, which might arise in the same three ways. Category 3 is political risks, including subversion of democratic processes. Category 4 is socio-economic risks, including sudden large-scale unemployment and massive societal inequalities. Category 5 is risks of unreliable performance, including unacceptable bias and system failures. Category 6 is risks of reliance on AI systems which operate in ways that are poorly understood or inexplicable. Category 7 is risks of inaction which may include lost opportunities and failure to avoid preventable harms.
The author notes that most of the current work on AI risks is focussed on categories 5 and 6, for example in relation to unfair discrimination, production of harmful or illegal content, and direct harm caused by AI systems. There is also a growing awareness, at least in some circles, of political risks including election interference, censorship, and unjustified surveillance (category 3). What we are not seeing, however, is much attention being paid to existential and catastrophic risks. This is perhaps because they appear, and indeed may be, very remote, but also because they are unlikely to be of much interest to politicians who are looking no further than the next election. Nevertheless, if a very low probability event might have existential or catastrophic consequences (think large asteroid strikes), then some preliminary assessment and contingency planning is probably in order. Category 4 (socio-economic risks) is interesting. While there is a great deal of public discussion, mostly informal, about the possibility that particular jobs might be eliminated by AI, politicians seem to be ignoring the risks associated with large-scale technological unemployment. Perhaps again, they see no political upside in raising awareness of this particular risk and hope that it won’t materialise on their watch.
The discussion in chapter 9 of what to do about AI risks is framed, helpfully in my view, in terms of ‘harnessing’, rather than ‘controlling’ or ‘containing’, AI. This is an acknowledgement that, for all the short-term fears and longer-term disaster scenarios, there is still a great deal of potential for AI to be used for good. Bearing that in mind, I would have liked to see more in the book about how to deal with the seventh category of AI risks which relates to inaction. What kinds of positive opportunity should be explored, and what responses might be made to anti-AI advocates whose efforts might result in serious harms occurring that could have been avoided if AI had been developed and deployed in specific contexts? Perhaps these risks of inaction only receive limited attention here because the author has previously written so much, and with such enthusiasm, about the benefits of embracing new technologies. There is, however, a brief discussion in the section on law and regulation of the twin dangers that AI regulation may be too narrow and specific (such as targeting chatbots as the concern du jour) or, conversely, so broad that it imposes excessive compliance burdens on developers and providers of potentially beneficial AI systems. For example, while lawyers and doctors might argue that there should always be a human expert in the loop to avoid harm to lay users, the greater risk might be a failure to address “the much greater social tragedy that most people on planet earth cannot afford legal or medical help at all”. I’m glad this point was made as I have for many years questioned the obsession amongst legislators and regulators with keeping humans in various loops, even where it is known that the judgement of the relevant individuals may be clouded (often due to subconscious bias), or that they lack the breadth of information, knowledge, and expertise that an AI might have.
What will happen, for example, when autonomous vehicles reach a stage of development where it is unquestionable that they are safer than vehicles with human drivers? I suspect that societal resistance will continue for some time after that point, and that AI-controlled vehicles will need to be much safer than those driven by even excellent human drivers before the tables are turned and we wonder why we let so many humans remain in control of such dangerous machines for so long.
More generally, who should be assessing the risks of AI, especially AGI (artificial general intelligence), and formulating policy recommendations? If we want to ‘save humanity with and from AI’, we will certainly need to think carefully about whom to entrust with the most difficult ethical decisions. The default position, in the absence of agreed norms and enforceable standards, will be to leave these decisions up to the companies that develop and deploy AI systems, and often to just a handful of extraordinarily powerful individuals. Is this really what we want? As Susskind puts it: “We need to be guided by Plato, Aristotle, and Kant rather than — with respect — Sam Altman, Elon Musk, and Mark Zuckerberg”. I suspect most readers will agree with this assertion, though some may be less polite in the way they express it! Just as we don’t allow doctors and healthcare providers to be the final arbiters of medical ethics, so we should not assume that computer scientists and tech titans should be left to make the rules for AI. Susskind suggests that the interdisciplinary approach, with public consultations, that has been used to tackle some of the fundamental issues in medical ethics might provide a useful model for AI too.
To infinity and beyond
So where is this all heading? Early in the book, Susskind considers five hypotheses which he labels Hype, GenAI+, AGI, Superintelligence, and Singularity. In the Hype hypothesis, the current breathless AI frenzy will fizzle out, and we will drift into the next ‘AI winter’ as people become disenchanted with unreliable systems that appear to have little practical utility. This hypothesis he rejects outright, pointing to the benefits that organisations and individuals are realising already, and the substantial financial and other commitments that are being made by businesses and governments to deploy existing AI systems. In the GenAI+ hypothesis, further development of GenAI systems will deliver improved and much more reliable versions of today’s systems which will be deployed widely and deliver major productivity gains. This, he is confident, will be demonstrated within the next few years. The AGI hypothesis involves a more dramatic step forward, with AI systems matching or exceeding the capabilities of humans. Until recently, the consensus in the AI community was that this would take twenty to forty years to happen and might never be achieved. In the light of recent developments in GenAI, however, Susskind now believes we should be preparing for AGI, or at least ‘near-AGI’, to arrive between 2030 and 2035. This outcome is not guaranteed, but the implications are so significant that it would be prudent to start now to address the ‘what-if-AGI?’ question.
The hypotheses that would take us beyond AGI are explored further in part 5 of the book. If AGI is reached, then superintelligence, involving systems vastly more capable than humans, might become an inevitability. Building on AGI, machines might develop more powerful machines, which in turn might develop even more powerful machines, and so on, at a rapid and compounding rate. This may seem unlikely to many readers, but if Susskind’s predictions about AGI are accepted, then it doesn’t require a great leap to imagine that AGI systems might keep on improving, including autonomously, in their capabilities and impact. The Singularity (where humans merge with machines) is the most speculative, and by far the most controversial, of the five hypotheses. Again, however, developments are likely to be incremental. For example, current experiments involving brain-computer interfaces, in healthcare and other fields, may be early steps on a path that will one day lead to humans becoming hybrid cybernetic organisms, or cyborgs. So-called ‘not yet invented’ technologies may again have a dramatic impact on the direction and pace of change.
In chapter 10, the topic of machine consciousness is discussed. Artificial or machine ‘intelligence’ is already a controversial concept, and Susskind largely avoids that rabbit hole by preferring the term ‘highly capable machines’. If we focus on outcomes rather than processes, we don’t need to get bogged down in intractable philosophical arguments about whether AI systems exhibit even basic human intelligence. Machine consciousness, however, is an order of magnitude more challenging and it is interesting that he chooses to do a relatively deep dive here. This may seem like an arcane philosophical digression, but I expect that some readers will find it fascinating as it exposes how little we really understand about human consciousness and, indeed, what is ultimately distinctive or exceptional about being a human being.
In the final chapter of the book, entitled ‘The Great Schism’, Susskind adds to his earlier five hypotheses a sixth which he calls the ‘AI Evolution Hypothesis’. This takes us into territories that have been explored by science fiction writers over the years, but which will seem alien (in multiple senses of the word) to many readers. Once machines are immeasurably more capable than humans, will they keep us on? After all, we will be competing for scarce resources and they may decide it is for the best that organisms which appear to them as sophisticated as pond life appears to us are put out of their misery and are prevented from destroying the planet via pollution and conflict. Maybe humans will go quietly into that good night once they have created massively capable systems with at least quasi-consciousness. Why not send them off into the cosmos and accept that our job is done? Susskind argues that we still have some autonomy. He describes the main likely options as ‘joint venture’ (whereby humans and AIs co-exist), ‘merger’ (with full integration of human and machine capabilities), and ‘takeover’ (with humans ceding control to AIs). He advocates going no further than joint ventures, he has grave reservations about mergers, and he argues that takeovers should be resisted, assuming resistance has not already become futile.
Does the book achieve its stated goal?
‘How to Think About AI’ is a much-needed call to take a step back from the short-term hype about AI, while taking seriously its potential future significance and pushing the boundaries of what most people consider remotely possible. The book is both informative and interesting, and it is written in a clear and lively style which will make it accessible to a wide audience. Susskind eschews easy answers and acknowledges that the long-term outlook is very uncertain. In the face of many known knowns, known unknowns, and unknown unknowns, he provides a framework for envisioning how AI might have an impact on us, both individually and collectively, and how we might prepare, not just react, in a constructive fashion. He points out that AI is delivering significant benefits already and that we are not capable of anticipating all the ways in which AI might be deployed for good. However, he also pulls no punches when it comes to setting out the major challenges we face, and the potential for adverse outcomes (some potentially catastrophic or even existential) if we mess this up.
But does the book deliver on the promise in the sub-title to provide a ‘Guide for the Perplexed’? If you want a guidebook that will help you to think more clearly about potential paths and destinations for AI, and a framework for asking relevant questions about opportunities and risks for individuals and society along the way, then ‘yes’. The book does not tackle the technical intricacies of things like neural networks and token weighting in large language models, but nor should it as the author made clear in his opening chapter.
I suggest we ask instead whether it is a good thing to remain perplexed. What if, when you finish the book, you find that you have more unanswered questions, and are more bemused and disconcerted, than when you started? At this very early stage in the development of AI, the tentative nature of much of Susskind’s analysis of the future is, in my view, a feature not a bug. We should not expect, or indeed want, all perplexity to be dispelled. The author will have delivered value if readers find themselves better placed to ask relevant questions and engage in constructive discussions about AI that go beyond the all-or-nothing polarisation that characterises much of the current debate.

Professor Christopher Millard is Professor of Privacy and Information Law at Queen Mary University of London and Senior Counsel to the law firm Bristows. He has more than 40 years’ experience as a technology lawyer in both academia and legal practice. He leads the Cloud Legal Project at QMUL and is Editor and Co-Author of Cloud Computing Law (Oxford University Press). He is a Life Fellow of the Society for Computers and Law.

How to Think About AI: A Guide for the Perplexed
Published by Oxford University Press
ISBN 9780198941927
March 2025
£10.99
This article is also available in the special AI issue of Computers & Law.