This monthly publication is designed for the wider tech‑law community, offering concise, accessible insights into the issues shaping our sector.
Each edition will include short pieces on topics such as:
- Recent deals and market developments
- Key legal updates in technology and AI
- Emerging trends across the tech industry
We welcome contributions from trainees, junior lawyers, and anyone across the tech‑law community who is keen to get involved.
For more information or to express interest in contributing, please contact hello@scl.org.
Deepfakes, Disclosures and Deadlines: Engineering AI Transparency Before 2 August 2026
On 17 December 2025, the European Commission published a first draft Code of Practice on Transparency of AI-Generated Content, intended to support compliance with the content-related elements of the EU AI Act’s Article 50 transparency obligations. The Commission’s timetable is compressed: feedback on the first draft closed on 23 January 2026, a second draft is due mid-March 2026, and the Code is expected to be finalised by June 2026. The underlying transparency rules become applicable on 2 August 2026.
The draft Code’s core message is practical: transparency cannot be treated as a one-off label added at the point of generation. It needs to hold up when content is shared, reformatted, edited, compressed, or re-uploaded. In other words, transparency needs to survive the real-world content lifecycle—because approaches that only work within your product are likely to break once outputs leave your environment.
What the draft Code is trying to standardise
The Commission describes the draft Code as having two sections, with different responsibilities across the value chain:
- Marking and detecting AI content (providers): measures aimed at making outputs machine-readable and detectable as artificially generated or manipulated.
- Labelling deepfakes and certain public-interest text (deployers): measures aimed at ensuring people are clearly informed when they are seeing (i) a deepfake, or (ii) certain AI-generated or manipulated text published on matters of public interest.
In practice, many organisations will sit in both categories – offering tools that generate or manipulate content while also publishing that content or enabling others to publish it. That is why the split matters most at the hand-off: a company may implement technical marking, but still deliver a user experience where disclosures are inconsistent, easy to miss, or lost downstream.
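To make the provider-side idea of machine-readable marking concrete, here is a minimal sketch in Python using the Pillow imaging library. It writes a provenance note into an image's EXIF metadata at the point of generation. The tool name, tag choices and file paths are illustrative assumptions on our part, not anything the draft Code prescribes.

```python
# Minimal sketch: embedding a machine-readable provenance marker in an
# image's EXIF metadata with Pillow. Tag choices and wording are
# illustrative assumptions, not requirements drawn from the draft Code.
from PIL import Image

SOFTWARE_TAG = 0x0131     # standard EXIF "Software" tag
DESCRIPTION_TAG = 0x010E  # standard EXIF "ImageDescription" tag

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of the image carrying an AI-provenance marker."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[SOFTWARE_TAG] = "example-genai-tool v1.0"  # hypothetical tool name
    exif[DESCRIPTION_TAG] = "AI-generated content"
    img.save(dst_path, exif=exif)

mark_as_ai_generated("output.jpg", "output_marked.jpg")
```

In production, teams are more likely to reach for richer provenance standards such as C2PA Content Credentials, but even this toy version is enough to expose the hand-off problem discussed next.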
The uncomfortable reality: “perfect labelling” is not the standard
The draft Code implicitly recognises a simple constraint: you cannot “solve” transparency with a single technology (such as a watermark) and assume the job is done. In live distribution environments:
- Watermarks can be degraded or removed through cropping, compression, or re-encoding.
- Metadata can be stripped by platform pipelines, screenshots, or copy/paste.
- Detection can be undermined through deliberate manipulation, remixing, or model-mixing.
The compliance question is therefore less “is every output always detectable?” and more: have you adopted reasonable, repeatable measures aligned to your key use cases, and can you show you designed for predictable failure modes?
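The metadata failure mode above is easy to demonstrate. Continuing the illustrative Pillow sketch, the snippet below re-encodes a marked image the way a platform pipeline or screenshot-and-repost flow might, and shows that the EXIF marker is silently dropped unless someone explicitly carries it over.

```python
# Minimal sketch: an EXIF provenance marker does not survive a naive
# re-encode. Paths continue the hypothetical example above.
import io
from PIL import Image

def marker_survives_reencode(path: str) -> bool:
    """Re-encode the image in memory (as a platform pipeline might)
    and check whether any EXIF metadata survives the round trip."""
    img = Image.open(path)
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=70)  # note: no exif= argument
    reloaded = Image.open(io.BytesIO(buf.getvalue()))
    return len(reloaded.getexif()) > 0

print(marker_survives_reencode("output_marked.jpg"))  # False: marker gone
```

Pillow, like many real pipelines, only writes metadata it is explicitly told to keep, which is precisely why a metadata-only approach fails the lifecycle test.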
What “good” looks like in practice
Stripped of legal framing, the draft Code points towards four characteristics of a credible transparency posture:
- Layering: more than one transparency mechanism, because any single method will fail in some flows.
- Consistency: aligned signals across UI, exports, and APIs so downstream systems can preserve transparency.
- Prominence: disclosures that are hard to miss in high-risk contexts (deepfakes; public-interest communications).
- Evidence: documentation of design choices, testing, and what happens when transparency mechanisms break.
A practical way to act without building a bureaucracy
Most organisations will not need an expansive, multi-workstream compliance programme to start making progress. A sensible initial approach is to focus on three deliverables, and then iterate as the Code develops and products mature:
- A scope map: identify where AI-generated or manipulated content is created, edited, exported, and published – paying particular attention to the transformations most likely to strip or degrade transparency signals.
- A transparency standard: a short internal specification that defines (a) what gets marked, (b) what requires disclosure, (c) how and where disclosures appear (in-product and on export), and (d) what transparency signals are passed through APIs to downstream users.
- A resilience test: a lightweight test suite that checks whether your transparency signals survive common formats and transformations (e.g., compression, cropping, re-encoding, reposting). Where they do not, you have a structured basis for adding a second layer (for example, pairing metadata with visible disclosure in relevant contexts).
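As a very rough illustration of that third deliverable, the sketch below (again Python with Pillow; the `has_marker` check is a hypothetical stand-in for whatever watermark or metadata reader a team actually uses) runs a marked image through a few common transformations and reports which ones destroy the signal.

```python
# Rough sketch of a transparency "resilience test". has_marker() is a
# hypothetical stand-in for a team's real watermark/metadata detector;
# the transformations and paths are illustrative.
import io
from PIL import Image

def has_marker(img: Image.Image) -> bool:
    """Hypothetical check: here we just look for any EXIF metadata.
    A real suite would call the team's actual detector(s)."""
    return len(img.getexif()) > 0

def roundtrip(img: Image.Image, quality: int = 75) -> Image.Image:
    """Save and reload, as any export/repost flow ultimately does."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return Image.open(io.BytesIO(buf.getvalue()))

TRANSFORMS = {
    "reencode_q60": lambda im: roundtrip(im, quality=60),
    "crop_10pct": lambda im: roundtrip(
        im.crop((im.width // 10, im.height // 10,
                 im.width - im.width // 10, im.height - im.height // 10))),
    "resize_half": lambda im: roundtrip(
        im.resize((im.width // 2, im.height // 2))),
}

def resilience_report(path: str) -> dict:
    img = Image.open(path)
    return {name: has_marker(fn(img)) for name, fn in TRANSFORMS.items()}

print(resilience_report("output_marked.jpg"))
# With a naive EXIF-only marker, expect every check to fail - which is
# exactly the structured evidence for adding a second transparency layer.
```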
Takeaway
Treat the draft Code as an early indication of what regulators are likely to view as a credible transparency posture by 2 August 2026: not simply “we added a label”, but “we engineered transparency across the content lifecycle.” If your approach does not survive export and redistribution, it is unlikely to survive scrutiny.
Safwan Akbar is a Trainee Solicitor at Morrison Foerster and a member of the SCL Trainee Group
Levelling the Playing Field – GenAI and Gaming
Game developers come in all shapes and sizes, from first-party developers (Nintendo, Sony, Xbox) and third-party heavyweights (EA or Ubisoft) to independent or ‘indie’ developers shipping from a single laptop. Traditionally, market influence has scaled with budget and headcount. However, the advent of AI, and in particular generative AI, is levelling the playing field: solo developers are reportedly three times more likely to use AI art tools than larger teams.
But in the UK and across Europe, the promise of GenAI will, in part, depend on how three legal fronts settle:
- data protection concerns relating to AI training;
- copyright issues at both dataset and model layers; and
- the application of the EU AI Act, which will increasingly shape the AI tools that game teams rely on.
This blog update looks at the legal considerations in play and how the regulatory landscape may unfold.
Data Protection
Europe’s most concrete signal to date has come from Germany. In May 2025 the Higher Regional Court of Cologne refused to block Meta’s plan to use publicly accessible Facebook/Instagram posts to train AI models. The court accepted legitimate interests under the GDPR where users were informed and could object. The court also saw no breach of the Digital Markets Act on the facts. Whilst this was an interim‑relief decision, it shows that legitimate interest‑based training on public data, with transparency and a workable right to object, can pass muster.
The European Data Protection Board’s late‑2024 opinion pushed in the same direction but set a high bar. Controllers must show strict necessity and pass a balancing test, and DPAs will scrutinise claims that models are “anonymous.” In the UK, the ICO adopts a comparable stance. Legitimate interests remains the only practical basis for current scraping, but the ICO expects developers to justify why scraping is necessary over licensed sources and to reduce “invisible” processing through clear transparency and workable opt‑outs.
Copyright
Despite the opportunity AI presents, 42% of indie developers cite “copyright infringement” as their primary hesitation when using AI. Do they have cause for concern?
The UK’s headline case, Getty Images v Stability AI, didn’t settle whether English law treats model training on copyright works as infringement, but it did indicate how risk should be managed. The High Court held that an intangible AI model can count as an “article” for UK secondary‑infringement purposes, but that Stable Diffusion was not an “infringing copy” because the model weights do not themselves store or reproduce the copyright works. The court also found only limited trade mark infringement, tied to outputs that reproduced Getty’s marks/watermarks.
The immediate read‑across is that tight filtering, deduplication and watermark/brand‑suppression on outputs materially reduce exposure to IP claims, and good provenance controls help demonstrate mitigation. At the same time, because the court didn’t rule on UK training liability, that question remains open for a better‑framed case focused on UK‑based acts of copying.
EU AI Act
On the game surface itself, Article 50 of the EU AI Act will require clear signalling when players interact with an AI system. The Commission’s second draft Transparency Code takes a multilayer approach. It covers metadata, imperceptible watermarking, and, where necessary, fingerprinting/logging, plus detectors and visible labels for deepfakes and AI‑generated text used to inform the public. The UK has not enacted an EU‑style horizontal AI law and continues to take a principles‑based, regulator‑led approach, with practical playbooks rather than prescriptive duties.
The implementation of the Act remains to be seen, but the signalling requirements may influence how different studios approach AI. Some indie teams may find the additional transparency steps resource‑intensive. Equally, implementing these measures could reassure players and rightsholders by providing greater clarity around synthetic content. Overall, the impact is likely to vary depending on the tools developers use, the scale of their workflows, and how the Commission refines guidance in practice.
Next steps
For UK and EU developers alike, the direction of travel seems consistent. Teams relying on AI, whether for assets, prototyping or localisation, should track how data protection, copyright, and AI‑specific regimes converge. A practical starting point includes:
- auditing the source of datasets and tools (a rough sketch follows this list);
- implementing transparency measures that can scale as rules become clear; and
- engaging early with vendors to understand how they intend to meet EU AI Act duties.
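By way of illustration only, the first of those steps can start very small. The Python sketch below walks a hypothetical assets directory and builds a provenance manifest recording where each file came from; the folder layout, `SOURCES` mapping and CSV format are all invented for the example, not drawn from any regulator’s guidance.

```python
# Illustrative sketch: building a simple provenance manifest for game
# assets. Folder names, the SOURCES mapping and the manifest format are
# hypothetical; the point is recording origin and licence per asset so
# the answers exist before a regulator or rightsholder asks.
import csv
import hashlib
from pathlib import Path

# Hypothetical mapping of asset folders to how their contents were made.
SOURCES = {
    "assets/handmade": "created in-house",
    "assets/licensed": "licensed stock (see licence register)",
    "assets/genai": "AI-generated (record tool, version and prompt)",
}

def build_manifest(out_path: str = "asset_manifest.csv") -> None:
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "sha256", "source"])
        for folder, source in SOURCES.items():
            root = Path(folder)
            if not root.is_dir():
                continue  # skip folders that don't exist in this project
            for p in root.rglob("*"):
                if p.is_file():
                    digest = hashlib.sha256(p.read_bytes()).hexdigest()
                    writer.writerow([str(p), digest, source])

build_manifest()
```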
As the regulatory landscape settles over the next 18–24 months, studios that build compliance into their pipelines now will be better placed to harness GenAI’s advantages without inheriting avoidable risk.
Avi Marcus is a Trainee Solicitor at DLA Piper and a member of the SCL Trainee Group
Blue Links, not Black Boxes – Competition Investigation opened into Google’s AI Summaries
If you type a search query into Google (or indeed, most major search engines), you will see at the top of the page an “AI Overview” box, with the usual blue links to individual websites (sometimes called “organic results”) below it.

[Screenshot: AI Overview’s response to the search query “holiday destinations in February”]
On 9 December 2025, the European Commission opened a formal antitrust investigation to assess, among other things, whether Google’s use of web publishers’ content to provide AI summaries could constitute an abuse of dominance under Article 102 TFEU. In particular, the Commission was concerned that Google may have been doing so (1) without providing appropriate compensation to publishers, and (2) without offering publishers the ability to refuse such use of their content without losing access to Google Search.
Anticompetitive effects / pro-competitive justifications
Several studies point to likely anticompetitive effects of this behaviour. For example, a recent study by the Pew Research Center suggested that people clicked a link only once in every 100 searches when there was an AI summary at the top of the page. A separate study by Bain & Company notes that 80% of consumers now rely on AI-written results, with a corresponding 15-25% reduction in organic web traffic. Publishers such as the Daily Mail have also claimed that visits to their websites from Google Search results have fallen by about 50% since Google introduced its AI Overview feature.
This represents a concern for web publishers and content creators who may rely on Google Search for a substantial amount of web traffic (and hence ad revenue, user visibility and so on). It is especially concerning, it is argued, where publishers’ and creators’ works are used to train the AI summary model and the only way to opt out is to lose access to Google Search altogether.
Pro-competitive justifications will likely be offered in response – e.g. that AI summaries improve the functionality or value of Google Search, that doing so results in better consumer outcomes and welfare, or that Google’s AI summaries attempt to link the user to the original source webpages. It would also need to be shown that any such justification could not be achieved through less restrictive alternative means.
Big tech antitrust cases, other copyright issues
We are unlikely to see the results of the Commission’s investigation in the immediate future, and any possible litigation, subsequent appeals, and follow-on claims will likely take many years to resolve. In the meantime, however, it is worth noting that big tech players are certainly familiar with competition law investigations into this kind of conduct – for a nostalgic example, see the Commission’s Internet Explorer case, in which concerns about tying the browser to Windows led to commitments and, ultimately, a fine when Microsoft failed to honour them. A proper assessment of what amounts to anti-competitive behaviour will be a fact-intensive exercise in each case, requiring very significant amounts of evidence, and will only be tested before the courts if the Commission issues a decision following the investigation and that decision is challenged.
Readers following the AI regulation space will also notice that Google’s behaviour raises parallel but distinct issues under copyright law. Similar arguments regarding compensation and opt-out provisions have been run in relation to unauthorised AI training (see e.g. the UK Government’s ongoing AI/IP consultation, or the recent High Court judgment in Getty Images v Stability AI). A substantial amount of commentary has been written on these issues – and as regulators catch up with enforcement measures, we get ever closer to seeing how the hammer will fall.
Solomon Chann is a Trainee Solicitor at Bristows and a member of the SCL Trainee Group