The ICO has published a Tech Futures report on agentic AI, in which it sets out its understanding of the emerging technology, including its potential uses and expected technical developments. The ICO shares its early thoughts on the data protection implications that organisations will have to consider as they explore the deployment of agentic AI, including data protection risks and opportunities. It also covers four possible scenarios that explore the uncertainty around how organisations might adopt agentic AI and how its capabilities might develop over the next two to five years.
It says that agentic artificial intelligence is evolving at pace, attracting scrutiny from innovators, technology adopters and regulators worldwide. As organisations consider deploying agentic AI, the ICO says that understanding its capabilities and the associated risks is essential.
Agentic AI combines the capabilities of generative AI with additional tools and new ways of interacting with the world. This increases the ability of AI systems to work with contextual information, operate using natural human language and automate more open-ended tasks. Agentic AI systems are being developed for use in research, coding, planning and transactions. Their potential applications span commerce, government, the workplace, cybersecurity, medicine and the consumer space. Many believe that agentic capabilities can form the foundation for powerful personal assistants.
While agentic AI offers some new technological capabilities, development is at an early stage, with many use cases unproven or still in development. The ICO is building an evidence base about where the technology currently stands. It urges caution about the proven abilities of agentic AI, while identifying and managing the related data protection issues and risks in a way that supports privacy-led innovation.
As agentic AI increases the potential for automation, organisations remain responsible for the data protection compliance of the agentic AI they develop, deploy or integrate into their systems and processes.
The ICO has already carried out a series of consultations on generative AI, which covered many of the issues that agentic AI shares with it. Data protection risks novel to agentic AI include:
- issues around determining controller and processor responsibilities through the agentic AI supply chain;
- rapid automation of increasingly complex tasks resulting in a larger amount of automated decision-making;
- purposes for agentic processing of personal information being set too broadly to allow for open-ended tasks and general-purpose agents;
- agentic systems processing personal information beyond what is necessary to achieve instructions or aims;
- potential unintended use or inference of special category data;
- increased complexity affecting transparency and the ease with which people can exercise their information rights;
- new threats to cyber security resulting from the nature of agentic AI; and
- the concentration of personal information needed to facilitate personal assistant agents.
One of the ICO’s initial findings is that the specific design and architecture of agentic systems affect how data protection law applies and how people exercise their data protection rights. It says that choices such as which data and tools a system can access, and which governance and control measures to put in place, really matter.
Poorly implemented agentic systems will increase the risks of data protection harms. For example, this could include systems that:
- have no clear purpose;
- are connected to databases not needed for their tasks; or
- have no measures in place to secure access, monitor or stop activity, or control the further sharing of information.
The importance of design and architecture also means that there are good opportunities for privacy by design and privacy-friendly innovation in agentic AI, which organisations should use for responsible deployment. The ICO says that it is already seeing some features and tools intended to address privacy issues.
In addition, it has identified innovation opportunities with agentic AI that have the potential to support data protection and information rights and contribute to privacy-positive outcomes. Potential areas include:
- data protection compliant agents;
- agentic controls;
- privacy management agents;
- information governance agents; and
- ways to benchmark and evaluate agentic systems.
The ICO is developing a statutory code on AI and automated decision-making (ADM), which will have implications for agentic AI. It will also start updating its guidance on ADM and profiling in light of the Data (Use and Access) Act, with public consultations in 2026.
It also highlights that the Digital Regulation Cooperation Forum has recently announced the launch of a Thematic Innovation Hub offering tailored engagement and regulatory advice on priority topics, the first of which will be agentic AI. The intention is to develop the regulators’ collective understanding of how one another’s regulatory regimes might apply to AI, and to work to identify and resolve any points of conflict.