Mark O’Conor and Gareth Stokes take a wider view of AI regulation and where the balance lies, sharing points that many businesses grapple with today.
In Dubai recently, we were struck by the pace of AI adoption and the easy acceptance that AI is already an entirely normal part of people’s lives. In the UAE, tens of thousands of civil servants have already undergone AI literacy training, AI education is compulsory in schools (from kindergarten to year 12), and a show of hands at our Tech Summit confirmed that almost everyone in the room uses AI, whether at home or at work.
We’ve already blurred the work vs home divide, through the introduction of readily available hand-held computing power; we can take a call, answer an email, review a document on the touchline whilst trying to ‘stay present’ for the children we are supposed to be supporting.
Add now the triumvirate of amazing compute power, super-smart algorithms and oceans of data, all nicely packaged with a UI that anyone can understand, and AI is now democratised across the world. So, is any attempt to regulate futile? Or should we be scared, and regulate back against the perceived bad actions? Of course, the truly bad actors will ignore all regulation, so the regulatory response can never be total. And if we do regulate, do we face the dilemma we discussed 25 years ago when the first wave of e-commerce regulation hit the streets: namely, can we simply apply ‘real world’ legal principles (whether civil or common law based), all understood and developed over centuries, or does new technology need a different, parallel (artificial?) set of rules to cater for the new artificial world?
Society as a 4D fractal: increasing complexity over time
The world continues to become ever more complex. Your authors are just about old enough to remember a time when the limitations of electric typewriters and drafts exchanged by post meant that a contract of 100 pages was considered long. Ubiquitous word processing, tracked changes and email to shuttle massive documents back and forth instantly have seen contract length expand by an order of magnitude. The ‘Better Regulation’ project over 15 years ago reviewed the cost of legal compliance for corporations, measured by assessing the legal obligations, the number of reports or other touch points with government that would be required, and the cost of meeting them. The answer was in the billions and led to the policy of ‘one in, three out’ for business regulation, a policy which has since gone by the wayside, perhaps in recognition that it is just too difficult to apply. Truly the Jevons paradox is alive and kicking, whereby attempts to increase the efficiency of a resource can lead to increased overall consumption rather than a decrease. Add AI to the mix and the paradox is proven even more strongly: as new tech becomes easier to use, people use more of it, leading to the ‘backfire effect’ whereby the hoped-for efficiency gains are completely cancelled out.
So, with the cat truly out of the bag and possibly pursued by a robotic dog, what are businesses to do? Should you take a ‘global highest standard’ approach, applying the most stringent rules everywhere? At least that way you know you are covered, even if the admin is a little heavy, and surely you will be best placed to defend a future risk, compliance or negligence case. Or do you take a jurisdiction-by-jurisdiction approach? This will be bespoke and potentially allows your organisation to benefit from lower regulatory standards in a few jurisdictions, but it would be costly to manage the differences around the world. Or should you follow the heretical approach, sometimes whispered behind closed doors: get on with the AI development and make money, so much money in fact that you can buy compliance later, when it is affordable…
Whatever steps a business takes, it is crucial, as the Canadian techlaw guru Duncan Card maintains, to be clear about the technological and business definitions of AI. The risks are profoundly different across those two definitions, with very different risk management paths. The technological perspective is the one discussed at length in the media thanks to warnings from Hawking, Hinton, Gates and others. The business perspective is to look at highly capable systems (HCS) which can undertake current business operations better, faster and less expensively. An HCS can, for example, monitor traffic congestion and instantly (and continually) balance 4,000 traffic lights/intersections to maximise traffic flow. The risks of HCS are very different from those of true AI. They centre on the veracity and reliability of outcomes, misappropriation of IP, the new and untested nature of the technology, transparency of operation, the industries most vulnerable to early job restructuring by HCS, and the fact that the use of HCS is not controlled in-house as a product but is mostly delivered remotely as SaaS or DaaS. Unfortunately, business hears both paths being discussed in the media all the time, and that can be confusing.
The Case for AI Regulation – Freedom Under the Law
The primary argument for regulating AI is rooted in the potential risks and ethical concerns associated with its deployment. AI systems, particularly those utilising machine learning and deep learning, can make decisions that significantly impact individuals and society. These decisions can range from mundane tasks like recommending movies to critical ones like diagnosing medical conditions or determining creditworthiness. Without oversight, AI risks replicating patterns of discrimination, especially when trained on data that reflects past inequalities.
Transparency is another critical concern. Many AI systems operate as “black boxes,” making decisions that even their creators struggle to explain. Regulation can drive the adoption of explainable AI, giving individuals the right to understand and challenge automated decisions while helping businesses build trust and defend their systems in regulated sectors like finance and healthcare.
Rather than stifling innovation, those in favour of regulation argue that clear rules can accelerate it. Regulation reduces uncertainty, enabling businesses to invest confidently in AI development and deployment. It sets common standards for safety, robustness, and accountability that raise the overall quality of AI products, helping avoid reputational and legal damage while fostering a competitive edge for companies who get it right.
Proponents of regulation believe that jurisdictions embracing robust AI governance are more likely to attract investment, talent, and global partnerships. Meeting high regulatory standards not only supports ethical practice but opens access to tightly regulated markets like the EU. Companies that act early gain a first-mover advantage, while those that delay may face costly retrofitting or exclusion.
The Case Against AI Regulation – Bureaucracy, Not Protection
On the other hand, some argue that regulating AI could stifle innovation and hinder technological progress. Moreover, the fast-paced nature of AI development means that regulations can quickly become outdated. Crafting effective regulations that keep pace with technological advancements is a formidable challenge.
At the same time, critics argue that current efforts to regulate AI may be focusing on the wrong threats entirely. Most regulatory proposals zero in on relatively narrow harms – like bias, transparency, and data protection – while ignoring the more existential risks posed by advanced AI. These include the challenge of AI alignment – the problem of ensuring that increasingly autonomous systems pursue goals compatible with human values – as well as more tangible dangers like AI-powered bioterrorism, the development and deployment of lethal autonomous weapons systems (LAWS), the large-scale dislocation of workforces, and the potential for economic or political instability that could lead to mass unrest or social revolution. Eliezer Yudkowsky, one of the earliest thinkers on AI safety, has argued that unaligned superintelligent AI could pose an existential threat to humanity itself, and that we are vastly underprepared for this scenario. Geoffrey Hinton, one of the “godfathers” of deep learning, has similarly voiced concern that AI systems may soon surpass human understanding and control, making the current regulatory focus on incremental, sector-specific harms feel dangerously shortsighted. Despite these dire warnings, many current legislative efforts have yet to grapple meaningfully with these high-consequence threats, focusing instead on near-term data and process risk through procedural safeguards rather than long-horizon systemic risks.
There is also the risk of regulatory capture, where powerful entities influence regulations to their advantage, potentially stifling competition and creating barriers to entry for new innovators. We have seen this before in other markets, where the rules are written to describe what the market leaders are already doing, rather than to meaningfully address any real mischiefs.
Finally, since it is widely accepted that AI has the potential to drive significant economic growth and improve various aspects of life, from healthcare to transportation, a delay in realising those benefits could lead to avoidable misery for many. Overly stringent regulations could slow down the development and deployment of beneficial AI technologies as much as harmful ones, leading to its own kind of suffering for those not able to benefit from advances the technology would have brought but for the regulatory sclerosis.
Current Approaches to AI Regulation
Major powers are taking very different approaches to AI regulation, meaning that we will get to see the impact of these arguments play out. In a globally interconnected world, and with the table-stakes being as high as they are, for those in favour of strong regulation, this experiment itself might provide cause for concern. At present, the approaches to regulation in the EU, US and China are on very different paths:
- The European Union (EU) has taken a proactive stance with the introduction of the EU AI Act, which categorises AI systems based on their potential harm and imposes strict requirements on high-risk applications. The Act aims to ensure that AI systems are safe, transparent, and respect fundamental rights.
- The United States presents a contrasting picture: at the federal level, the approach is resolutely pro-innovation, with the current administration rolling back previous executive orders in favour of minimal intervention and encouraging AI leadership through deregulation. In contrast, individual US states are actively developing their own AI legislation on issues such as algorithmic bias, consumer protection, deepfakes, and training transparency – resulting in a fragmented and fast-evolving patchwork of rules that businesses must navigate. However, if federal efforts to ‘press pause’ on state level rules are successful, this could see the laissez-faire approach apply at all levels in the US.
- China has implemented stringent regulations on AI, particularly as regards Generative AI and in areas related to national security and social stability. The Chinese government has also invested heavily in AI research and development, aiming to become a global leader in AI technology.
Can We Take the Best of Both Worlds?
So we have done it again – made life more complex. But feeling overwhelmed by complexity misses the point. The creativity we show in designing our future is only limited by our imaginations. Your authors are truly excited by what might happen next. The arc of history tends to bend toward progress in the medium-to-long term – technological and social. We will, we are convinced, accelerate medical research, achieve universal food supply, figure out how to cope with rising tides and temperatures, and that’s all merely looking at what we do on the surface of our planet.
But even as we marvel at the possibilities, we must recognise that AI now places humanity at a fork in the road. One path leads toward dystopia: a world where AI exacerbates inequality, enables bioterrorism or autonomous warfare, hollows out job markets, and ultimately concentrates power in the hands of a few. This path is not fiction; it is a plausible outcome if we fail to act. The other path offers the glimmering promise of a post-scarcity society, where AI helps us overcome age-old problems like hunger, disease and poverty, enabling lives of creativity, dignity and fulfilment. Some AI leaders have explicitly pointed to Iain M. Banks’s “Culture” novels as a model for such a future.
Regulation, then, is not about bureaucracy for its own sake – it is about making sure we do not stray blindly onto the dark path. The goal must be to design governance frameworks that don’t get in the way of innovation that serves humanity, but instead severely punish conduct that would imperil us all. Perhaps the most important thing for AI regulation is to look not just at what AI does today, but to the horizon of what it might become. And what of the legal profession? We must support this vision by advocating for sensible, horizon-focused frameworks, by helping organisations identify and avoid ethical pitfalls, and by ensuring that the rule of law can evolve alongside our most transformative technologies. All this is best achieved by advocating for interoperability through common international standards.
So, climbing down from the hyperbole, what should a business do now, today, in planning for tomorrow? What do we know?
- Everyone is using AI
- Everyone is using AI for personal and business reasons
- Everyone hopes that it will save time, and often it does
- No one has figured out what to do with that spare time
- No one has figured out whether things that took longer to do, still have the same value when done more quickly
- Our obsession with detail means we risk diving into potential edge cases which might never happen… but just in case, we should cater for them (creating longer and longer rules)
- We are holding machine intelligence to a standard that we struggle to maintain for ourselves
This is not new-news; Lord Clement-Jones’s House of Lords AI Select Committee report in 2018 drew evidence from educators, lawyers, economists, technologists, scientists and regulators. We all understood the questions, way back in 2018, but are we any nearer to the answers?
We have come full circle. Shouldn’t we, as a legal fraternity, be arguing for common-sense rules that most nearly replicate existing real-world principles, so that we can smooth the path of business and be the solution architects, not the ‘department of lost opportunity’?
Here’s a thought experiment. What if we decide that the US moratorium against AI law is ‘a good idea’? What does the business world look like next year, or the year after, given the pace of change and AI advancement? Does that lead to a fragmented approach, with very different tech stacks for the US, EU and China? Is there a point when it gets too scary and we do need top-down regulation, or has that point already passed?
(With wise counsel and improvements from Lord Tim Clement-Jones and Duncan Card).

Mark O’Conor is the Vice President of SCL and Partner & Chair of the London Client Group at DLA Piper UK LLP in London.

Gareth Stokes is a Partner at DLA Piper focusing on AI, information technology, strategic sourcing and intellectual property-driven transactions, and is part of the leadership team for DLA Piper’s global AI Practice Group.
This article is also available in the special AI issue of Computers & Law, which is available to download here.