Who’s been regulating my AI?: Goldilocks and the House of Lords Report on LLMs and Generative AI

March 21, 2024

A team from DLA Piper dig deeper into the House of Lords report on balancing risk and innovation in AI regulation, in particular, the open source question

While much attention has been paid to the finalisation of the EU’s AI Act in recent weeks, developments in AI continue at a frenetic rate. On 2nd February 2024 the House of Lords Communications and Digital Committee published its report on large language models and generative AI.

That report covered a variety of issues, with particular attention paid to two topics:

  • what the HoL refers to as the ‘Goldilocks problem’ – the challenge of getting the balance between innovation and risk just right, especially in the context of open and closed AI models; and
  • copyright and liability issues arising from development, training and use of LLMs.

This article looks at the ‘Goldilocks problem’, considers open and closed access models, and evaluates the UK government’s approach to handling AI systems against the international landscape. Part two of the series will consider the points raised by the HoL regarding liability and copyright.

The ‘Goldilocks Problem’

The purpose of the HoL report was to examine the likely trajectories for LLMs over the next three years, and what the UK should do to capitalise on the opportunities and mitigate the risks. In particular, the HoL recognised that governments will play a central role in determining which AI companies flourish (referred to as ‘steerage’), because developers rely on access to energy, compute and consumers – all of which are dictated to a significant degree by government policy and investment. However, the HoL quickly acknowledged that governments face what it refers to as the ‘Goldilocks’ problem – “getting the balance right between innovation and risk, with limited foresight of market developments.”

The EU has also faced this problem in its regulation of foundation models, or ‘general purpose AI’ to use the definition in the most recent text. The EU wanted to regulate these models tightly, but was unsure where to set the boundaries, while facing pushback from Member States that did not want to stifle innovation by their domestic AI companies.

The EU AI Act landed on defining general purpose AI models that present a ‘systemic’ risk as those trained with computing power above 10²⁵ floating point operations. It remains to be seen whether this is the correct threshold, particularly as computing power increases and AI training techniques allow far more capable models to be trained with fewer parameters and less computation. It is easy to imagine these developments producing models with the same unpredictable ‘emergent properties’ that present a systemic risk, yet trained with far fewer than ten septillion floating point operations.
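To put that threshold in context, training compute is often estimated with the rule of thumb that a dense transformer consumes roughly six floating point operations per parameter per training token. The short sketch below uses that approximation – an assumption on our part, not a figure taken from the AI Act or the HoL report – to compare a hypothetical model against the 10²⁵ FLOP line.

```python
# Illustrative only: compares an estimated training budget against the
# EU AI Act's 10^25 FLOP "systemic risk" threshold. The 6 * parameters *
# tokens approximation is a widely used rule of thumb, not a legal test.

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating point operations

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training compute estimate for a dense transformer."""
    return 6 * parameters * tokens

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=70e9, tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above the systemic-risk threshold?", flops > SYSTEMIC_RISK_THRESHOLD)
```

As the example suggests, a very capable model can sit just below the threshold, which is precisely the concern about fixing the boundary at a single compute figure.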

Open vs Closed Access Models

The first key consideration in tackling the Goldilocks problem is the contest between open and closed access models.

Open access models are made publicly available, allowing anyone to use, modify and distribute the resources without significant restrictions. Distributors of open access models may also publish the parameters and allow others to fine-tune the model. Examples of open access models include the Technology Innovation Institute’s ‘Falcon’ models and Meta’s ‘Llama’ models. Open access models arguably create a ‘virtuous circle’: ease of access enables more people to experiment with the technology, while promoting greater transparency and opportunities for community-led improvements. An example of this is Hugging Face’s Transformers library, which began as an in-house initiative but has since been significantly expanded, supported and improved by the community.
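To illustrate the ease of access described above, the snippet below shows how little is needed to download and run a publicly released model with the Transformers library. This is a minimal sketch: the model identifier is simply an example of an openly distributed checkpoint, and in practice licence terms, acceptable use policies and substantial hardware requirements still apply.

```python
# Minimal sketch: running an openly distributed model with Hugging Face
# Transformers. "tiiuae/falcon-7b" is shown purely as an example of an
# open access checkpoint; it requires significant memory to run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="tiiuae/falcon-7b")
output = generator(
    "The House of Lords report on large language models",
    max_new_tokens=40,
)
print(output[0]["generated_text"])
```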

Yet with such accessibility comes the inevitable risk of bad actors. Providers of open access models often impose acceptable use policies to prevent misuse, but there are concerns that not enough guardrails are in place, given the ease with which individuals can circumvent security measures.

There have been reports of individuals, including high school students, using open source image generation tools together with pictures taken from social media to create deepfakes of people they know in real life – often of an indecent nature. These fears are heightened by the expectation that AI models will only become more powerful; with greater proliferation of open access models anticipated, critics are concerned this could lead to catastrophic risk. Furthermore, once the technology is made public, there is no ‘undo’ option. While it might be feasible to integrate tracking features into models, research in this area is still in its infancy. Additionally, many models will be hosted abroad, presenting significant obstacles to supervision and regulation.

However, Yann LeCun, Meta’s chief AI scientist, believes the idea of existential risk to humankind is “preposterous.” Instead, he argues that open access is a moral necessity: AI systems will determine the information we consume, and that cannot be dependent on a closed system. Open access allows individuals to contribute, ensuring a more democratic process and promoting greater diversity.

In contrast, closed access models limit the availability and use of AI resources. In some cases these are provided as ‘free to use’ services; in others (especially in a corporate context) they involve payment or subscription fees. Due to their closed nature, these models can be subject to a greater number of safety controls, and these barriers to entry are designed to prevent bad actors from using the model for nefarious purposes. Additionally, the aims of closed model developers are driven and shaped by the interests of their shareholders, which are economic rather than malicious. Closed models are less transparent than open access models, although it could be argued that the transparency of open access models is itself limited, since publishing parameters does not demystify the black box decision-making of an AI system. However, there have been high profile examples of users ‘jailbreaking’ closed models and finding ways to persuade them to generate content they would normally refuse to create. Again, many individuals (including very high profile celebrities) have been victims of deepfake images created by users who have found ways around the guardrails imposed on closed AI models.

The HoL report also notes that closed access models are often adopted to protect intellectual property and maintain competitive advantages. Concerns have been expressed that this may lead to the creation of monopolies, especially given the first-mover bias we have seen within the technology space. As noted by Ben Brooks of Stability AI, we have “one search engine, two social media platforms and three cloud computing providers”. Given this recent history, there are concerns that Big Tech is arguing for closed systems in order to head off potential disrupters to its market power. Not only could this erode competition; closed systems also leave fewer downstream opportunities for other businesses to examine and experiment with the underlying technology.

Moreover, the HoL notes that an AI monopoly would create a single point of failure. Though closed models are more secure than open access models, they are not impenetrable: breaches remain possible through hacks and leaks, espionage and even disgruntled employees, indicating that even meticulously safeguarded systems may eventually be compromised. Such a failure could be more catastrophic than the risk envisaged for open access models, because (so far) closed access models are more sophisticated and powerful, and the expectation is that they may continue to lead open models. Nevertheless, politicians are leaning towards favouring closed systems over open access, in part because they perceive a catastrophic risk in the ease with which bad actors can gain access to powerful AI tools.

On this basis, the HoL has suggested that the Government should use policy to ensure fair market competition, safeguarding UK businesses from being marginalised in the rapidly expanding LLM industry. With its distinct strengths in mid-tier businesses, the UK stands to gain the most from a strategic blend of open and closed source technologies – in particular, by not restricting the innovation of low-risk open access model providers. To this end, the HoL recommends that the Government collaborate with the Competition and Markets Authority and vigilantly monitor the competitive landscape in foundation models to uphold fair competition.

Political and Regulatory Landscape

Throughout the HoL inquiry, concerns were expressed that politicians could fall victim to ‘regulatory capture’. The reasons for this lie partly in fears of the potentially catastrophic events AI could cause, but also in the public sector’s heavy reliance on knowledge developed in the private sector and the back-and-forth flow of staff between private and public sector organisations.

After citing a number of close links between private sector AI companies, investors and those in public roles related to the regulation of AI, the HoL called for greater public information on the mitigations in place to reduce conflicts of interest and to build greater confidence in the Government. Though lobbying, asymmetries of information and conflicts of interest are not new to the relationship between the private and public sectors, the concerns here are greater given the power of the technology, the extent of the knowledge asymmetries and the importance of this pivotal moment. The HoL also expressed concern that regulatory capture could result in UK politicians overlooking the innovation opportunities presented by AI.

In March 2023 the Government published its “pro‑innovation approach to AI regulation”, but it has not yet developed the holistic approach suggested in that White Paper. The HoL considers this an economic drag on AI investment, as engineers flock to countries that do fund and promote the building and development of models. Further, the HoL’s evidence suggested that leadership in AI safety and commercial prowess are closely linked: you cannot fully comprehend what safety requires without the experience of working on the models, because the development of AI and a deeper understanding of safety are interconnected. If the UK is not involved in building or testing models, its people will not be equipped to handle the safety concerns. As the report notes, “should the UK fail to develop rapidly as a hub for the development and implementation of LLMs, and other forms of AI, it is likely to lose influence in international conversations on standards and regulatory practices”.

The report states that the Government needs to recognise that long‑term global leadership on AI safety requires a thriving commercial and academic sector to attract, develop and retain technical experts. Otherwise, the UK will lag behind its international rivals and become overly reliant on a few foreign technology companies. The HoL noted the following points from the Government’s Science and Technology Framework as the most important for the UK’s technological progress:

  1. increasing computing capacity;
  2. up-skilling individuals, including secondments between industry, government and regulators;
  3. funding academic commercial spin-outs;
  4. developing a sovereign LLM; and
  5. regulatory certainty.

Focusing on regulation, the HoL notes that the UK could draw valuable insights from the United States’ approach to context-specific regulation, the European Union’s efforts to address significant risks, and China’s proactive stance on technological advancement – all while swiftly tackling societal and security issues. However, directly copying these models may not be suitable for the UK, given differences in market dynamics, regulatory appetite and political goals compared to the US, EU and China. Adopting a balanced approach would position the UK to champion these policies on the global stage, enhancing the credibility of, and support for, its domestic AI ecosystem. The Glenlead Centre supported legislation, stating that without it the UK would be a ‘rule‑taker’, as businesses would have to comply with more stringent rules set by other countries – just as GDPR has set a ‘gold standard’ for data protection that is aspired to globally.

Nevertheless, the report observes that when the Government must choose the direction of regulation, it is once again confronted with the Goldilocks problem. The Government should not move too quickly, as it risks creating and crystallising poor rules, but inaction is not an option either, as new harms will continue to surface. Further, legislating retroactively could lead to protracted efforts to create a highly complex regulatory framework that struggles to dismantle already established business practices. Rachel Coldicutt OBE, of Careful Industries, cited the progress of the Online Safety Act as a cautionary tale, urging a more robust strategic direction led by the Government, supported by proactive measures to avert harm and encourage ethical innovation.

The HoL advised that extensive primary legislation targeted specifically at LLMs is not advisable at this point, due to the nascent stage of the technology, high levels of uncertainty, and a significant risk of unintentionally hampering innovation. Instead, it advised that the UK government focus on establishing a strategic direction for LLMs and crafting regulatory frameworks that support innovation effectively and swiftly, endorsing the approach outlined in the overall White Paper. However, progress in establishing the necessary central support functions has been disappointingly slow: by November 2023, regulators remained in the dark about the operational status and functioning of these central units. This delay signals a misalignment in priorities and casts doubt on the Government’s dedication to creating the regulatory infrastructure essential for fostering responsible innovation.

The HoL held that the effectiveness of existing regulators in securing positive results from AI hinges on their being adequately equipped and authorised. In the HoL’s view, the Government must therefore establish uniform powers for the primary regulators tasked with AI supervision, enabling them to collect data on AI operations and carry out technical, empirical and governance evaluations. Additionally, the report states that it is crucial to implement significant penalties to act as effective deterrents against serious misconduct. This report (again) shows the HoL to be at the forefront of the debate on approaches to AI regulation.

That regulation is too hard, and that regulation is too soft, but this regulation is just right…

Evidently, to solve the Goldilocks problem, the UK government must foster competition between open and closed access models and be proactive but measured in its approach to regulation. The trick will be adopting a strategy that is ‘just right’ – in the hope that the bears don’t turn up unexpectedly.

Parman Dhillon is an Associate in the IP and Tech team with experience in tech agreements, outsourcing deals, IP transactional agreements and data protection.

Gareth Stokes is part of the leadership team for DLA Piper’s global AI Practice Group. His practice focuses on AI, information technology, strategic sourcing and intellectual property-driven transactions.

Mark O’Conor is the Global Co-Chair for DLA Piper’s Technology Sector, co-chairs the firm’s AI Group, was previously UK Managing Partner and is the Vice President of the Society for Computers and Law.

This article was first published on the DLA Piper website and is reproduced here with their permission.