ICO launches guidance on AI and data protection

August 2, 2020

The ICO has launched its guidance on AI and data protection. It sets out best practice for data protection-compliant AI, as well as how the ICO interprets data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code.

The ICO says that applications of AI increasingly permeate many aspects of our lives. It understands the distinct benefits that AI can bring, but also the risks it can pose to the rights and freedoms of individuals.

Consequently, the ICO has developed a framework for auditing AI, focusing on best practices for data protection compliance – whether organisations design their own AI systems or implement systems from a third party. It provides a clear methodology to audit AI applications and ensure that organisations process personal data fairly. It comprises:

  • auditing tools and procedures that the ICO will use in audits and investigations;
  • detailed guidance on AI and data protection; and
  • a forthcoming toolkit designed to provide practical support to organisations auditing the compliance of their own AI systems.

The guidance is aimed at two audiences:

  • those with a compliance focus, such as data protection officers, general counsel, risk managers, senior management, and the ICO’s own auditors; and
  • technology specialists, including machine learning experts, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

The guidance clarifies how organisations can assess the risks to rights and freedoms that AI can pose from a data protection perspective, and the appropriate measures they can implement to mitigate those risks.

While data protection and ‘AI ethics’ overlap, the guidance does not provide generic ethical or design principles for using AI. Instead, it is structured around the data protection principles, as follows:

  • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
  • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
  • part three addresses data minimisation and security; and
  • part four covers compliance with individual rights, including rights related to automated decision-making.

The accountability principle makes organisations responsible for complying with data protection and for demonstrating that compliance in any AI system. In an AI context, accountability requires organisations to:

  • be responsible for the compliance of their AI systems;
  • assess and mitigate the risks those systems pose; and
  • document and demonstrate how each system is compliant and justify the choices they have made.

Organisations should consider these issues as part of a DPIA for any system they intend to use. They should note that, in the majority of cases, they are legally required to complete a DPIA if they use AI systems that process personal data. DPIAs provide an opportunity to consider how and why an organisation is using AI systems to process personal data and what the potential risks could be.

It is also important to identify and understand controller/processor relationships, given the complexity and mutual dependency of the various kinds of processing typically involved in AI supply chains.

Striking the required balance between the right to data protection and other fundamental rights in the context of an organisation’s AI systems involves weighing a range of competing considerations and interests. These need to be identified and assessed during the design stage. Organisations should then determine how to manage them in the context of the purposes of their processing and the risks it poses to the rights and freedoms of individuals. However, if an AI system processes personal data, organisations must always comply with the fundamental data protection principles and cannot ‘trade’ this requirement away.

When AI is used to process personal data, it must be lawful, fair and transparent. Compliance with these principles may be challenging in an AI context.

AI systems can exacerbate known security risks and make them more difficult to manage. They also present challenges for compliance with the data minimisation principle.

Two security risks that AI can increase are the potential for:

  • loss or misuse of the large amounts of personal data often required to train AI systems; and
  • software vulnerabilities to be introduced by new AI-related code and infrastructure.

By default, standard practices for developing and deploying AI involve processing large amounts of personal data, which risks breaching the data minimisation principle. A number of techniques exist that support both data minimisation and effective AI development and deployment, as the illustrative sketch below suggests.
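The guidance itself stays at the level of principles, but the sketch below illustrates the kind of technique the paragraph above alludes to: keeping only the fields a model actually needs, and perturbing retained numeric features with noise so the stored training data is less precise about any individual. This is a hypothetical illustration rather than an ICO-endorsed method; the field names, noise level and helper functions are assumptions made for the example.

```python
# Minimal illustrative sketch of two data minimisation techniques sometimes
# used in AI development. Hypothetical example only; field names and the
# noise level are assumptions, not recommendations from the ICO guidance.
import numpy as np

rng = np.random.default_rng(seed=42)


def minimise_record(record: dict, required_fields: set) -> dict:
    """Keep only the fields the model actually needs (data minimisation)."""
    return {k: v for k, v in record.items() if k in required_fields}


def perturb_features(features: np.ndarray, relative_noise: float = 0.05) -> np.ndarray:
    """Add noise proportional to each value, so retained training data is
    less precise about any individual, at some cost to model accuracy."""
    noise = rng.normal(0.0, 1.0, size=features.shape) * np.abs(features) * relative_noise
    return features + noise


# A raw record typically contains more fields than the model needs.
raw = {"name": "Jane Doe", "postcode": "AB1 2CD", "age": 42.0, "income": 31000.0}
minimised = minimise_record(raw, required_fields={"age", "income"})

features = np.array([[minimised["age"], minimised["income"]]])
print(minimised)                    # identifying fields dropped before storage
print(perturb_features(features))   # perturbed values retained for training
```

Techniques of this kind involve a trade-off between the precision of the retained data and the performance of the resulting model, which is the sort of design choice organisations would be expected to document and justify.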

The way AI systems are developed and deployed means that personal data is often managed and processed in unusual ways. This may make it harder to understand when and how individual rights apply to this data, and more challenging to implement effective mechanisms for individuals to exercise those rights.