In its AI and biometrics strategy, published in June 2025, the ICO made the use of automated decision-making (ADM) in recruitment a key regulatory focus. It said that it would:
- scrutinise major employers’ and recruitment platforms’ use of ADM in recruitment to identify risks related to transparency and discrimination, and to address misuse;
- publish findings and regulatory expectations; and
- hold employers to account if they fail to respect people’s information rights.
The ICO has now published a report setting out its findings to date and its regulatory expectations based on those findings. The report draws on evidence gathered from over 30 employers that voluntarily spoke to the ICO between March 2025 and January 2026. It notes that the ICO has seen many examples of employers using automation to make or support recruitment decisions.
The ICO is of the view that automated recruitment tools have a role to play in helping candidates and employers alike. For example, they can enable employers to process high volumes of applications consistently and quickly, and the ICO supports innovation in using new tools. However, it also recognises the risks these tools can pose to people. Addressing those risks gives candidates and employers a better mutual understanding of the benefits and drawbacks of ADM in recruitment, and therefore greater trust in how these technologies are used.
The ICO’s key finding is that many employers engaging in automated recruitment are likely to be relying on solely automated decisions as part of this process. In other words, they are using automated systems without meaningful human involvement, and the decisions these systems take have legal or similarly significant effects on people. This brings those decisions within the scope of the UK GDPR’s provisions on solely automated decision-making, meaning a greater range of safeguards must apply than the ICO’s evidence suggests are currently in place.
The ICO is calling for employers to make sure that they apply data protection law correctly as they adopt new practices and technologies. If organisations wish to use ADM in recruitment processes, it expects them to:
- Proactively monitor for bias: organisations need to work hard to build trust with people when there are ingrained concerns about bias and discrimination. They also need to test regularly for biased outputs and take steps to mitigate any bias found, so people can trust that all decisions are fair. Good practice also includes asking developers about their own bias testing when procuring tools and considering monthly bias reviews.
- Be transparent with jobseekers: organisations need to be clear with candidates if ADM is being used and explain how it works.
- Explain rights to recourse: organisations must tell candidates how to exercise their right to challenge a decision and request a human review if they believe it is incorrect.
The ICO has also updated its guidance on ADM and profiling, which is now open to consultation until 29 May 2026.
The ICO’s views accord with those of the CMA, which recently published similar guidance on using (agentic) AI in a consumer context and raised similar issues about bias and transparency.