UK government consults on policy statement on regulating AI

July 18, 2022

In its National AI Strategy, the UK government stressed the importance of ensuring that the UK’s regulatory regime keeps pace with, and responds to, the new and distinct challenges and opportunities posed by AI.

It therefore plans to establish what it describes as a “pro-innovation” framework for regulating AI. The framework will be underpinned by a set of cross-sectoral principles tailored to the specific characteristics of AI, and is:

  • Context-specific. It proposes to regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and to delegate to regulators the responsibility for designing and implementing proportionate regulatory responses. This aims to ensure a targeted and proportionate approach.
  • Pro-innovation and risk-based. It proposes to focus on addressing issues where there is clear evidence of real risk or missed opportunities. It will ask regulators to focus on high-risk concerns rather than hypothetical or low risks associated with AI. It wants to encourage innovation and avoid placing unnecessary barriers in its way.
  • Coherent. It proposes to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. To achieve coherence, and to support innovation by making the framework as easy as possible to navigate, the government wants to encourage regulatory coordination, eg by working closely with the Digital Regulation Cooperation Forum and other regulators and stakeholders.
  • Proportionate and adaptable. It proposes to set out the cross-sectoral principles on a non-statutory basis in the first instance so its approach remains adaptable – although this will be kept under review. It wants regulators to consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, the government will seek to work with existing processes rather than create new ones.

The cross-sectoral principles include:

  • Ensure that AI is used safely;
  • Ensure that AI is technically secure and functions as designed;
  • Make sure that AI is appropriately transparent and explainable;
  • Embed considerations of fairness into AI;
  • Define legal persons’ responsibility for AI governance; and
  • Clarify routes to redress or contestability.

The government is calling for views on its proposed approach, in particular on the following questions:

  • What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?
  • Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?
  • Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?
  • Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?
  • Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?
  • Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

The call for views and evidence ends on 26 September 2022. A White Paper will follow for consultation in late 2022.