Shaping the Shape-Shifters: Reflections from the SCL Policy Forum on AI Regulation

October 14, 2025

Dr Fernando Barrio captures some of the insights aired at the SCL’s recent AI Policy Forum.

AI shows no sign of slowing down, and neither does the pace of conversation around its regulation. The AI Policy Forum, held on 10 July 2025 in the striking Octagon at Queen Mary University of London, made an important and timely contribution to that debate. With the European Union unveiling its AI Code of Practice, the UK advancing an AI Opportunities Action Plan, the Data (Use and Access) Act newly in force, and myriad AI policies emerging around the globe, the conversation has shifted from theoretical speculation to urgent necessity.

“Have we been here before?” Regulation and innovation in the AI arena

The opening session featured Gabriela Commatteo, Jenifer Swallow and Minesh Tanna, who explored the delicate interplay between inventiveness and oversight.

Jenifer argued that lawyers often receive law rather than shape it, a passive posture that forecloses the profession’s capacity to anticipate and head off legal harms.  

Minesh Tanna described how global aviation standards emerged through conscious cooperation, and the panel criticised the absence of sunset clauses in safe harbour regimes such as intermediary liability, an omission that allows harm to fester rather than be revisited. A clear prescription emerged: existing harms should be mapped against existing laws and enforcement approaches should be rethought, on the understanding that, in general terms, it is the misuse of technology that needs regulating, not the technology itself. After all, we do not regulate electrons but the damage they cause.

The lesson holds for AI, whose intricate liability chains demand leadership, cohesion and investment, sustained by sensible incentives. Swallow saw organisations like SCL as perfectly placed to lead such mapping, guiding both the law as it is and the law as it could be.

Data, copyright and trust

The second panel saw Andrés Guadamuz and Raj Shah tackle “Ownership in AI systems: IP and Data Governance of AI inputs and outputs”, deconstructing intellectual property and data protection challenges in the age of generative systems.  

Raj addressed transparency as the linchpin of public trust. The UK government envisions AI permeating education, healthcare and beyond, but without trust such a vision remains aspirational. Shah referenced the collapse of a health data-sharing initiative where public resistance stemmed less from principle than from poorly communicated benefits. He argued that transparency must be intelligible, not buried in dense legalese, and noted that the UK lacks the statutory transparency duties found in the EU’s AI Act.

On the global front, he acknowledged that supervisory authorities such as France’s CNIL have adopted a pragmatic stance, but lawful bases for processing sensitive data remain a grey area. A statutory UK basis for such processing, especially in anti-bias efforts, would help developers secure legal clarity and users access fairer systems.

Andrés Guadamuz, well known for his Technollama blog, brought a specialist lens to copyright and AI. He observed that many artists, when confronted with the complex and uncertain terrain of AI infringement, are deliberately choosing not to enforce their copyrights when AI systems train on or mimic their work. This voluntary non-enforcement, he suggested, reflects both resignation in the face of legal ambiguity and a pragmatic acceptance of creative mashups. It raises critical questions about the balance between enforcement, collaboration and creative freedom.

Guadamuz also laid bare the core legal tensions in copyright and generative AI. Key uncertainties revolve around whether AI-generated outputs can be copyrighted and under whose name. This is a domain where UK law is distinctive, attributing authorship to whoever “made the arrangements for its creation”. Moreover, the use of copyrighted works as training data, often scraped en masse from the internet, further complicates matters. While some jurisdictions permit text and data mining under opt-out models, copyright frameworks remain poorly equipped to handle the transformation of a creative commons into proprietary models.

Is AI sustainable – in both senses?

The third panel on sustainability, with Fernando Barrio and Azfaal Mauthoor, introduced a dual imperative, where AI is both a tool and a target of sustainability.

AI’s promise in the sustainability realm is real. Azfaal recalled work being done in different areas of Africa, and Fernando explained how the Digital Technologies group of the UNFCCC’s Technology Committee is focusing on AI for Climate Action.

The dialogue ranged across how AI could optimise energy grids, model climate impacts, support disaster prediction and improve ecosystem management. At the same time, the speakers acknowledged the real environmental cost of AI’s vast energy and water consumption. Accordingly, policy must both encourage AI applications that aid ecological goals and mandate that AI systems themselves meet sustainability benchmarks: innovation should not come at the planet’s expense.

Can lawyers shape future policy?

The final panel, chaired masterfully by SCL’s Vice President, Mark O’Conor, brought together Dana Denis-Smith, Lord Timothy Clement-Jones and Chris Marsden, who looked to the future and the translation from policy to regulation.

Dana Denis-Smith emphasised the power of debate itself, noting that conversations about our future with AI are valuable precisely because they open up disagreement and multiplicity. She highlighted the timeliness of the Forum alongside the EU’s AI Code of Practice, released that same week, and argued that voluntary codes are insufficient for technologies of epochal scale. Tactical regulation leaves society on the defensive and vulnerable to outcomes it cannot manage, so bold international frameworks are not optional but imperative.

Chris Marsden offered a conceptual scaffold, distinguishing three legal modes: self-regulation (“unregulation”), co-regulation and state regulation. The UK has historically oscillated among them, especially under European mandates, but generative AI tests this model. Marsden judged the current “first, do no harm” posture, with exceptions only for deepfakes and disinformation, to be “regulatory hallucination”: the semblance of oversight without real effect. Generative AI, he argued, demands principled co-regulation or stronger structures to protect the public interest.

Lord Timothy Clement-Jones closed the Forum’s reflections on trust and responsibility, demanding ethics, transparency and demonstrable human benefit so that people can accept AI on terms they trust. He encouraged businesses to ask whether they exist merely to serve shareholders or to benefit wider stakeholders: customers, employees and society. For Clement-Jones, responsible innovation is not optional; it requires cross-disciplinary collaboration and inclusive governance grounded in values, not just legality.

What did we learn?

Taken together, the contributions at the SCL Forum highlight that law must move from reactive to anticipatory. Swallow’s call to map harms against existing laws; Shah’s public-friendly transparency; Denis-Smith’s plea for global frameworks; Marsden’s regulatory typology; Guadamuz’s insight on copyright ambivalence; Clement-Jones’s ethical imperative; and the sustainability panel’s reminder of dual accountability all converge on the same truth: AI governance must be enforceable, intelligible, ethical and planet-aware.

The Forum also affirmed that the legal profession is ready for a more active role in shaping, not merely applying, regulation, and it illustrated the need for global cooperation, societal trust, environmental responsibility and fair treatment of creative labour. As the UK reshapes its digital regulatory architecture in dialogue with European and international norms, the SCL and its Policy Forum are where the legal community can seize its role as architect of technological futures.

Dr Fernando Barrio is a Lecturer in Business Law in both the School of Business and Management and the School of Law at Queen Mary, University of London. His research currently focuses on AI, accountability and human rights, creative industries (IP+), smart farming, sustainability, and the development of human-centred technology regulatory principles.

This article is also available in the special AI issue of Computers & Law, which can be downloaded from the SCL website.