Ofcom’s strategic approach to AI

June 11, 2025

Ofcom has issued a report on how it is supporting the safe innovation and use of artificial intelligence across the sectors it regulates, and how it is streamlining the way it works.

Smarter communications

The industries Ofcom regulates have technology and innovation at their heart. As technologies evolve, new opportunities emerge that have the potential to drive better outcomes for consumers and businesses. For example:

  • Online platforms use automated content moderation to identify harmful content at scale and with greater speed, helping improve safety for their users.
  • Broadcasters use AI to generate real-time captions, translate content into multiple languages, and provide automated dubbing and audio descriptions.
  • Telecoms companies use AI to help keep their networks secure, and in the future they may use AI to enhance network management.
  • Spectrum allocation could be optimised to help reduce congestion on networks and enhance network efficiency to deliver a better service for consumers.
  • Postal companies could further optimise delivery routes which could save money, reduce carbon emissions, and improve reliability and quality of service for consumers.

In general, Ofcom’s regulation is technology-neutral, which means regulated companies are essentially free to deploy AI as they see fit, without needing Ofcom’s permission. This helps enable faster innovation and growth.

That said, while AI affords new opportunities and benefits for businesses and consumers, it is important for Ofcom to stay ahead of any associated risks, and take action to mitigate them.

Supporting innovation

Encouraging and promoting economic growth is built into Ofcom’s duties, and it is working on a range of initiatives to support AI innovation to help achieve this. These include:

  • Creating safe spaces to experiment with technology. Together with Digital Catapult, Ofcom runs SONIC Labs, which provides an interoperable (“Open RAN”) test-bed for mobile network equipment vendors to explore the use of AI in mobile networks.
  • Providing large data sets to help train and develop AI models, improving their outputs. For example, Ofcom’s data sets on how spectrum is used in the UK have enabled academia and industry to develop state-of-the-art AI models for spectrum use cases.
  • Collaborating with other institutions to provide regulatory alignment. For example, Ofcom works with the CMA, ICO and FCA through the Digital Regulation Cooperation Forum to understand new AI applications such as agentic AI.

Mitigating risks

Ofcom says that both industry and consumers benefit from AI deployment, but the risks created or exacerbated by AI primarily flow to the consumer. These risks can cause serious harm to individuals, especially online. For example, two in five UK internet users aged 16+ say they have seen a deepfake, and among those, one in seven say they have seen a sexual deepfake. Of those who say they have seen a sexual deepfake, 15% say it was of someone they know, 6% say it depicted themselves, and 17% thought it depicted someone under the age of 18.

To tackle deepfakes and a range of other serious online harms, Ofcom is implementing and starting to enforce the Online Safety Act. Its “safety by design” rules require platforms to take down illegal content created by AI and to assess the risks of any changes they make to their services. These rules aim to create a safer life online for all UK users, especially children, while ensuring that tech firms keep the flexibility and freedom to innovate.

How Ofcom is using AI

Ofcom’s data and technology teams include more than 100 technology experts, around 60 of them AI specialists, many with direct experience of developing AI tools. It is carrying out more than a dozen trials of AI in its own work, aimed at increasing its productivity, improving its processes and generating efficiencies. These range from using everyday third-party GenAI applications to building GenAI-based applications in-house. For example:

  • Streamlining the translation of broadcast content in response to complaints, by using an AI translator in conjunction with its broadcast recording service. This has allowed Ofcom to redeploy resources to other priorities and has reduced translation costs.
  • Developing a customised text summarisation tool to analyse large sets of consultation responses, finding patterns and themes more quickly and efficiently.
  • Using AI to improve spectrum planning, with significant potential to increase the amount of data that can be transmitted over a given bandwidth, especially in built-up areas using high frequencies.

Over the next year, Ofcom says that it plans to accelerate the use of AI across its policy areas as appropriate, adopting a safety-first approach. In practice, this means continuing to trial AI tools and only rolling them out across the organisation once it is confident they are safe and secure.