UK Law
ICO consults on enforcement procedural guidance
The Information Commissioner's Office (ICO) is consulting on draft enforcement procedural guidance. The guidance explains when and how the ICO opens investigations and uses its information-gathering powers, how enforcement decisions are made, and when settlement with reduced fines may be appropriate. The consultation ends on 23 January 2026.
Ofcom issues update on investigation into online suicide forum
Ofcom has provided an update on its investigation into the provider of an online suicide forum under the UK’s Online Safety Act. On 1 July 2025, the forum implemented a voluntary block to restrict users with UK IP addresses from accessing the service. Ofcom has been actively monitoring these restrictions to check they are maintained consistently and to make sure the service does not promote or encourage ways for UK users to avoid them. Ofcom now has reason to believe, based on evidence provided to it by the Samaritans on 4 November 2025, that the service is available to UK users. It is therefore progressing its investigation as a priority and aims to reach a conclusion as swiftly as it can. It will provide further updates on this investigation as soon as possible.
Ofcom issues call for evidence about the use and effectiveness of age checks by online services, and the use of app stores by children
Ofcom has issued a call for evidence about the use and effectiveness of age checks by online services, and the use of app stores by children. Ofcom is required to produce separate reports on these two issues under the Online Safety Act. To inform its understanding and analysis, it is gathering a wide range of evidence and views focusing on two areas. The first is how providers of regulated services have used age checks to comply with their online safety duties, how effective those checks have been, and whether any factors have prevented or hindered their effective use. The second is the role app stores play in children encountering harmful content, how app stores currently use age checks and how effective they are, and whether greater use of age checks or other safety measures at app store level could improve children’s online safety. The call for evidence closes on 1 December 2025. Taking responses and other evidence into account, Ofcom will publish and submit its report on the use and effectiveness of age assurance to the UK government by July 2026, and its report on the use of app stores by children by January 2027.
ICO issues £200,000 fine for sole trader who sent nearly one million spam texts
The ICO has fined a sole trader £200,000 for sending almost one million spam texts about debt solutions and energy saving grants. The trader has also been issued with an enforcement notice ordering him to stop sending marketing messages without the appropriate consent. The trader came to the ICO’s attention through several previous investigations. Following investigation, the ICO concluded that between 3 December 2023 and 3 July 2024, the trader knowingly and deliberately transmitted or instigated the transmission of 966,449 text messages without valid consent, breaking direct marketing rules. This resulted in 19,138 complaints via the 7726 spam reporting service. He has appealed the ICO’s decision.
Bank of England issues update about Digital Pound
The Bank of England and HM Treasury are continuing to explore the case for a digital pound. In a recent update, they said that no decision has been made on whether to introduce a digital pound. The current “design phase” of the digital pound workplan runs through 2026. They have focused on three priorities: advancing technical work on shared public infrastructure and hands-on experimentation to support private-sector innovation in money and payments; investigating how a digital pound and other types of digital money can interoperate with existing forms of money and payment systems; and gathering and integrating a range of stakeholder evidence to inform the design of any potential digital pound. The digital pound project continues to progress, with a strong focus on evidence-based design, industry collaboration, and alignment with the UK’s broader payments vision.
Updated judicial guidance on AI published
Updated guidance to assist judicial office holders in relation to the use of AI has been published. It replaces the guidance document issued in April 2025. The refreshed guidance adds to the glossary of common terms and expands on the risks of bias in training data and of AI hallucinations, which generate incorrect or misleading information. It provides further advice on confidentiality, reminding judicial office holders not to enter private information into public AI tools, and signposts where to report any inadvertent disclosures as data incidents. The updated guidance applies to all judicial office holders for whom the Lady Chief Justice and Senior President of Tribunals are responsible, their clerks, judicial assistants, legal advisers/officers and other support staff.
UK government announces new laws to target online abuse and pornography
The UK government has announced that it will use the Crime and Policing Bill to create a new criminal offence of depicting strangulation in pornography, which will be designated as a priority offence under the Online Safety Act. This means that platforms will be held accountable and must make sure such content does not spread, as its circulation can normalise harmful practices in people’s private lives. Platforms will be required to take proactive steps to prevent users from seeing illegal strangulation and suffocation content. This could include using automated systems to pre-emptively detect and hide the images, moderation tools, or stricter content policies to prevent abusive content from circulating.
Ofcom issues research about presentation of content settings
Ofcom has published research about how the presentation of sensitive content settings influences user choices. It tested the impact of different choice architectures – such as wording, layout, and the level of granularity – on the decisions users made regarding their sensitive content controls. The results provide strong evidence that the granular choice intervention was the most effective in promoting safer online behaviour, with more than seven in ten participants using content controls to reduce their exposure to sensitive content. In contrast, changes to wording had no impact, while visual cues increased uptake of safer settings but were sometimes perceived as less trustworthy. The effectiveness of content controls can also vary across user groups, with age, gender, and online habits shaping safety choices. So, what does this tell us? How organisations present online safety information matters, and it is important for platforms to consider how best to design effective safety tools that help unlock choice and empower users to take more control over the content they see online.
EU Law
European Commission starts work on a code of practice on marking and labelling AI-generated content
The European Commission has started work on a code of practice on the marking and labelling of AI-generated content. Under the AI Act, content such as deepfakes and certain AI-generated text and other synthetic material must be clearly marked as such. This requirement reflects the growing difficulty of distinguishing AI-generated content from authentic, human-produced material. The AI Act sets out transparency requirements for providers and deployers of certain AI systems, including generative and interactive AI. These rules aim to reduce the risk of misinformation, fraud, impersonation, and consumer deception by fostering trust in the information ecosystem. This week, the experts appointed by the European AI Office began the process; they will draft the code over the next seven months. The upcoming code of practice on transparency of AI-generated content will be a voluntary instrument to help providers of generative AI systems effectively meet their transparency obligations. It will support the marking of AI-generated content, including synthetic audio, images, video and text, in machine-readable formats to enable detection. The code will also assist deployers using deepfakes or AI-generated content in clearly disclosing AI involvement, particularly when informing the public on matters of public interest. These obligations will become applicable in August 2026, complementing existing rules such as those on high-risk AI systems and general-purpose AI models.
European Data Protection Board consults on templates
The European Data Protection Board (EDPB) is developing a series of ready-to-use templates. It aims to provide practical tools that organisations can readily implement to meet their data protection obligations. To ensure these templates address the needs of organisations, the EDPB is asking for feedback on which types of templates would be most beneficial (for example, a template for privacy notices or a template for records of processing activities). The EDPB is already working on templates for key GDPR requirements such as Data Protection Impact Assessments (DPIAs) and data breach notifications. The consultation ends on 3 December 2025.
EDPB adopts opinion on draft adequacy decision for Brazil
During its latest plenary, the EDPB adopted an opinion on the European Commission’s draft decision on the adequate level of protection of personal data in Brazil. In its opinion, requested by the Commission, the EDPB assesses whether the Brazilian data protection framework and the rules on government access to personal data transferred from Europe provide safeguards essentially equivalent to those in EU legislation. The Board positively notes the close alignment with EU legislation and case law, and also considers whether the safeguards provided under Brazilian law are effective. The EDPB invites the Commission to provide further clarifications and to monitor certain areas relating to Data Protection Impact Assessments (DPIAs), the limitations on transparency arising from commercial and industrial secrecy, and the rules on onward transfers. Generally, the Brazilian data protection law does not apply to data processed by Brazilian public authorities for the exclusive purposes of public safety, national defence, state security, or the investigation and prosecution of criminal offences. At the same time, the EDPB positively notes that, under Brazilian case law, the Brazilian data protection law does apply in part to criminal investigations and the maintenance of public order. The EDPB says that the Commission should further specify the applicability of the Brazilian data protection law, as well as the Brazilian Data Protection Authority’s investigatory and corrective powers in relation to law enforcement authorities. Finally, the Board asks the Commission to further clarify Brazil’s concept of national security.