UK government publishes response to AI White Paper

February 12, 2024

The UK government has published its response to its AI White Paper, which was released last year. The White Paper set out initial proposals to develop a “pro-innovation regulatory framework” for AI. The proposed framework outlined five cross-sectoral principles for the UK’s regulators to interpret and apply within their remits. The government also proposed a new central function to bring coherence to the regime and address regulatory gaps.

The five principles were:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

The government says there was strong support for these principles. It says that it remains “committed to a context-based approach that avoids unnecessary blanket rules that apply to all AI technologies, regardless of how they are used. This is the best way to ensure an agile approach that stands the test of time.” 

Since the publication of the White Paper, the CMA has published a review of foundation models to understand the opportunities and risks for competition and consumer protection, and the ICO has updated its guidance on how data protection law applies to AI systems to cover fairness.

The government has written to several regulators affected by AI, asking them to publish an update outlining their strategic approach to AI by 30 April. It is encouraging regulators to include:

  • An outline of the steps they are taking in line with the expectations set out in the white paper.
  • Analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these.
  • An explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.
  • A forward look of plans and activities over the coming 12 months.

The government also proposed an AI central function. It says it has started developing the central function to support effective risk monitoring, regulator coordination, and knowledge exchange. It has also published guidance to support regulators to implement the principles effectively.

The government highlights three broad categories of AI risk: societal harms, misuse risks, and autonomy risks.

Societal harms

  • Preparing UK workers for an AI-enabled economy – the government will publish guidance on the use of AI in HR and recruitment. It will also publish a skills framework later this year and fund AI-related courses.
  • Enabling AI innovation and protecting intellectual property – creative industries and media organisations have particular concerns about copyright protection in the era of generative AI. The Intellectual Property Office convened a working group of rights holders and AI developers to examine the interaction between copyright and AI. However, it is now clear that the working group will not be able to agree an effective voluntary code, and the government intends to carry out further research and engagement in this area.
  • Protecting UK citizens from AI-related bias and discrimination – regulators such as the ICO have updated guidance.
  • Reforming data protection law to support innovation and privacy – the Data Protection and Digital Information Bill will expand the lawful bases on which solely automated decisions that have significant effects on individuals can take place.
  • Ensuring AI driven digital markets are competitive – the CMA has carried out an initial study and the Digital Markets, Competition and Consumers Bill aims to give it the tools it needs to regulate digital markets.
  • Ensuring AI best practice in the public sector.

Misuse risks

  • Safeguarding democracy from electoral interference – among other things, the Online Safety Act 2023 will capture specific activity aimed at disrupting elections where that activity amounts to a criminal offence within scope of the Act’s regulatory framework.
  • Preventing the misuse of AI technologies – the NCSC published guidelines for secure AI system development in November 2023. The Online Safety Act and the Product Security and Telecommunications Infrastructure Act also aim to provide regulation in this area.

Autonomy risks

  • The government has examined the case for new responsibilities for developers of highly capable general-purpose AI systems. It says that while voluntary measures are a useful tool for addressing risks today, it anticipates that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed.
  • It is also working with international partners on AI governance.