First-tier Tribunal considers FOI request regarding use of generative AI tools by HMRC

September 15, 2025

The First-tier Tribunal recently issued its ruling in Thomas Elsbury v Information Commissioner & HMRC [2025] UKFTT 00915 (GRC). The Tribunal was asked to consider a request for information under the Freedom of Information Act 2000 (FOIA) about the use of large language models (LLMs) by HMRC.

Thomas Elsbury (E) specialises in R&D tax relief claims. In December 2023 he submitted a detailed FOI request seeking information about the use of LLMs such as ChatGPT by HMRC’s R&D Tax Credits Compliance Team. HMRC initially confirmed it held relevant information but withheld it under section 31(1)(d) of the FOIA (prejudice to the assessment or collection of tax), and maintained that stance on internal review. When E complained to the ICO, HMRC changed its position, saying that it should instead have issued a “neither confirm nor deny” (NCND) response under section 31(3), again citing section 31(1)(d).

The ICO accepted HMRC’s new position, concluding that confirmation or denial would “assist those intent on defrauding the system”. E was not happy with that outcome and appealed to the Tribunal on three grounds: misapplication of the prejudice test; an erroneous public-interest balancing exercise; and serious procedural flaws.

The Tribunal’s findings

The Tribunal held that once a public authority has unequivocally confirmed it holds information, it is “like trying to force the genie back in its bottle” to convert that into an NCND stance. Section 31(3) could not be invoked retrospectively to undo the original confirmation.

HMRC provided no evidence of a causal link between mere confirmation or denial of LLM usage and any increased fraud risk, nor did it demonstrate that the likelihood of prejudice was “real, actual or of substance”.

The Tribunal therefore found that the ICO’s decision to uphold the NCND response was an error of law. It said that even if section 31(3) had been validly engaged, the Tribunal would have overturned the Commissioner’s decision. It accorded “considerable force” to E’s arguments that:

  • AI accountability: Government use of generative AI in core tax functions raises acute transparency and fairness issues.
  • Chilling effect on legitimate claims: Evidence suggested HMRC’s opaque approach was discouraging genuine R&D investment and undermining policy objectives.
  • Trust and confidence: Mysterious AI-flavoured letters with American spellings and inconsistent reasoning were eroding taxpayer confidence.

The Tribunal found these factors outweighed the speculative prejudice advanced by HMRC. As a result, HMRC must, within 35 working days (subject to any appeal), confirm whether it holds the requested information and either disclose it or issue a section 17 refusal notice citing any other applicable exemption.

The ICO has issued guidance setting out the requirements for accountability and transparency when deploying AI.