UK law
ICO issues reprimand to Greater Manchester Police over CCTV failings
The ICO has issued a reprimand to Greater Manchester Police (GMP) following failures in its storage and handling of CCTV footage. A person was held in custody for 48 hours in February 2021, during which period a CCTV system was in operation. GMP’s Professional Standards Directorate (PSD) later submitted an internal request to retain this footage beyond the typical 90-day retention period. When responding to a subsequent related subject access request, the force realised that two hours of the footage were missing. GMP states that, despite all attempts, it is unable to recover the missing footage. This led GMP to self-report a personal data breach to the ICO on 5 September 2023. The ICO’s investigation assessed GMP’s compliance with data protection law on the storage of CCTV footage. It ruled that GMP had failed to provide the complainant with their personal data without undue delay and within the applicable one-month period, and had failed to ensure that appropriate technical or organisational measures were in place to protect against the accidental loss of the CCTV data it was processing. The investigation identified two key failings in GMP’s data protection practices: a misunderstanding among GMP staff about who was responsible for conducting a quality check of the retained footage; and a lack of policies and guidance within GMP identifying that quality checks were required or who was responsible for carrying them out. Since the incident, GMP has taken remedial action, including: implementing clearer retention policies for CCTV footage; investing proactively in its surveillance and security system infrastructure in 2023, significantly upgrading its system capabilities; strengthening internal oversight and governance measures to prevent similar incidents; and introducing a strictly regulated process to ensure that only authorised force personnel can access footage held on the CCTV server. The Independent Office for Police Conduct is conducting an ongoing investigation into the wider case.
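The missing footage went undetected because no one checked the retained export. Purely by way of illustration, the sketch below shows the kind of automated coverage check that could catch such a gap: it scans the retained segments for holes in the recorded period. The names and the tolerance value are hypothetical assumptions for this sketch, not GMP’s or the ICO’s actual process.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of a post-retention quality check: verify that a
# retained CCTV export actually covers the whole period of interest.
# Segment, verify_coverage and the 5-second tolerance are assumptions.

@dataclass
class Segment:
    start: datetime
    end: datetime

def verify_coverage(segments: list[Segment],
                    period_start: datetime,
                    period_end: datetime,
                    tolerance: timedelta = timedelta(seconds=5)):
    """Return any gaps between consecutive retained segments within the period."""
    gaps = []
    cursor = period_start
    for seg in sorted(segments, key=lambda s: s.start):
        if seg.start - cursor > tolerance:
            gaps.append((cursor, seg.start))  # footage missing here
        cursor = max(cursor, seg.end)
    if period_end - cursor > tolerance:
        gaps.append((cursor, period_end))     # footage missing at the end
    return gaps

# Example: a 48-hour custody period with a two-hour hole in the export.
start = datetime(2021, 2, 1, 9, 0)
end = start + timedelta(hours=48)
retained = [Segment(start, start + timedelta(hours=20)),
            Segment(start + timedelta(hours=22), end)]
for gap_start, gap_end in verify_coverage(retained, start, end):
    print(f"Gap detected: {gap_start} -> {gap_end}")  # flags the missing two hours
```

Run as part of the retention workflow, a check like this would have flagged the two-hour gap at the point of retention rather than months later, when a subject access request fell due.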
New ETSI standard protects AI systems from evolving cyber threats
ETSI has published a new standard, developed with contributions from the NCSC, setting out transparent, high-level principles and provisions for securing AI. It comprises a technical specification, “Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems”, and an accompanying technical report that helps stakeholders implement the specification’s cyber security provisions, including examples mapped to various international frameworks. The standard aims to give stakeholders in the AI supply chain (including developers, suppliers, integrators and operators) a robust set of baseline security requirements that help protect AI systems from evolving cyber threats. These stakeholders may range from large enterprises and government departments to independent developers, small and medium enterprises, charities, local authorities and other non-profit organisations. The guidance will also be useful for anyone planning to purchase AI services. The specification is the first global standard to set minimum security requirements across the entire AI life cycle for all stakeholders in the AI supply chain. More specifically, it aims to allow developers of AI systems to demonstrate to prospective customers and partners that they adhere to a framework created through cross-disciplinary collaboration, with requirements that are both globally relevant and practically implementable. The standard defines 13 core security principles grouped into five key stages of the AI system development life cycle: secure design, secure development, secure deployment, secure maintenance, and secure end of life. Considering security requirements at every stage of the development life cycle prevents costly redesigns later and safeguards customers and their data in the near term. The next step is to progress towards a European standard in conjunction with other European and international standards bodies.
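To illustrate how the five life-cycle stages might be used in practice, the sketch below models a simple compliance checklist keyed to those stages. The stage names come from the specification as summarised above; the checklist items are illustrative assumptions, not the standard’s actual 13 principles or provisions.

```python
from enum import Enum

# The five life-cycle stages are taken from the ETSI specification as
# summarised above; the checklist items attached to each stage are
# illustrative assumptions, not the standard's actual provisions.

class Stage(Enum):
    SECURE_DESIGN = "secure design"
    SECURE_DEVELOPMENT = "secure development"
    SECURE_DEPLOYMENT = "secure deployment"
    SECURE_MAINTENANCE = "secure maintenance"
    SECURE_END_OF_LIFE = "secure end of life"

# Hypothetical compliance checklist keyed by life-cycle stage.
checklist: dict[Stage, list[str]] = {
    Stage.SECURE_DESIGN: ["threat model the AI system", "document trust boundaries"],
    Stage.SECURE_DEVELOPMENT: ["track training-data provenance", "scan dependencies"],
    Stage.SECURE_DEPLOYMENT: ["harden the serving API", "rate-limit model access"],
    Stage.SECURE_MAINTENANCE: ["monitor for model drift and abuse", "patch promptly"],
    Stage.SECURE_END_OF_LIFE: ["securely dispose of model weights and data"],
}

def outstanding(done: set[str]) -> dict[Stage, list[str]]:
    """Items not yet evidenced, grouped by life-cycle stage."""
    return {stage: [item for item in items if item not in done]
            for stage, items in checklist.items()}

if __name__ == "__main__":
    completed = {"threat model the AI system", "scan dependencies"}
    for stage, items in outstanding(completed).items():
        if items:
            print(f"{stage.value}: {items}")
```

Keying evidence to the life-cycle stages in this way is one simple means by which a supplier could demonstrate to prospective customers which baseline requirements remain open at each stage.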
CAP clarifies AI disclosure requirements in UK advertising regulation
The Committee of Advertising Practice (CAP) has issued a guidance note describing the current regulatory position on artificial intelligence disclosure in UK advertising. It notes the EU’s AI Act but confirms that there is no general requirement under UK law to disclose the use of AI in advertisements. CAP reiterates that the rules in the CAP and BCAP advertising codes apply to AI-generated content, and it highlights the 12 guiding principles developed by the Incorporated Society of British Advertisers and the Institute of Practitioners in Advertising, which recommend transparency where AI features prominently and is not obvious to consumers. It also says advertisers should ask themselves whether the audience is likely to be misled if the use of AI is not disclosed and, if so, whether a disclosure would clarify the ad’s message or contradict it.
EU law
European Commission seeks views on the use of data to develop AI
The European Commission is seeking views on the use of data in AI, on simplifying the rules that apply to data and on international data flows to inform the forthcoming Data Union Strategy. The Data Union Strategy aims to help the EU build high-quality, interoperable, and diverse datasets that are necessary for AI. The strategy also aims to ensure coherence between policies, infrastructures, and legal instruments on data. It will build on existing work to enable trusted cross-border data flows, support common data spaces and their links with the AI ecosystem, and ensure trust in data sharing. The consultation ends on 18 July.
Irish DPC issues statement on Meta AI
Over the past two years, the Irish Data Protection Commission (DPC) has been working closely with leading technology companies regarding compliance with GDPR in the context of AI developments, particularly when it comes to using personal data to train Large Language Models in the EU/EEA. In March 2024, Meta told the DPC about its plans to train its Large Language Model using public content from Facebook and Instagram. The DPC identified concerns with Meta’s proposal, which led Meta to pause its training plans in June 2024. The DPC sought a formal GDPR Opinion from the European Data Protection Board. The Opinion, issued in December 2024, provided criteria for assessing compliance in AI model training and deployment. Following the Opinion, Meta updated its proposal and documentation, which the DPC reviewed. Meta implemented several measures to protect data subjects, including updated transparency notices, an easier-to-use Objection Form, longer notice periods, and improved data protection measures such as de-identification and filtering. The DPC continues to monitor Meta’s compliance and has required Meta to compile a report on the efficacy of its measures, expected in October 2025. Users have been informed about how to object to the processing of their public posts and how AI training affects their personal data. The DPC advises users to regularly review their privacy settings. The DPC says that it remains committed to ensuring responsible innovation and protecting individuals’ data rights while balancing these against companies’ interests. It aims to regulate fairly and consistently across the EU/EEA.
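The DPC’s statement names de-identification and filtering among Meta’s measures without describing how they are implemented. Purely as an illustration of the general technique, the sketch below scrubs direct identifiers from a public post before it would be used for training; the patterns and placeholder scheme are assumptions, and production pipelines are far more sophisticated.

```python
import re

# Illustrative only: the DPC statement mentions "de-identification and
# filtering" as measures but does not describe Meta's implementation.
# These patterns sketch the general technique of replacing direct
# identifiers with typed placeholders before text is used for training.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "HANDLE": re.compile(r"@\w{2,}"),
}

def deidentify(text: str) -> str:
    """Replace direct identifiers with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

post = "Great meetup! Email me at jane.doe@example.com or ring +44 20 7946 0958, @jane_d"
print(deidentify(post))
# Great meetup! Email me at [EMAIL] or ring [PHONE], [HANDLE]
```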