AI and Data Protection: The ICO Guidance (1)

October 20, 2020

In this first instalment, Quentin looks at the background to the Guidance and its treatment of the principles of accountability and transparency. In the next instalment, Quentin will review the approaches in the Guidance to the principle of data minimisation and to individual rights, as well as offering concluding thoughts.

———————

The Information Commissioner’s Office (‘ICO’) recently published its Guidance on Artificial Intelligence and personal data protection (‘the Guidance’).

The Guidance is available for download from the ICO website, so rather than provide a detailed review, this article gives a high-level overview, discusses remaining challenges and identifies the specific areas relevant to AI where further ICO guidance and toolkits are anticipated.

Background

Although perhaps not as prevalent as technologies based on deterministic algorithms, Artificial Intelligence (‘AI’) systems are increasingly pervasive in modern economies, across nearly every industrial sector. The European Commission’s Communication on AI for Europe,1 for example, says that AI systems have “the potential to transform our world, from improving healthcare, reducing energy consumption and predicting climate change to credit scoring and fraud detection.”

Of course, whilst many AI systems will process personal data (and, in many cases, in very large quantities), not every AI system does so. Those AI systems that do not process personal data fall outside the ambit of the Guidance. However, for those AI systems that do, the Executive Summary of the Guidance notes that:

“The way AI systems are developed and deployed means that personal data is often managed and processed in unusual ways. This may make it harder to understand when and how individual rights apply to this data, and more challenging to implement effective mechanisms for individuals to exercise those rights”.  

The Guidance outlines a risk-based approach, involving managing trade-offs, that requires common-sense application of data protection law principles to the development and deployment of those AI systems that process personal data.

The development and deployment of AI is evolving rapidly. This means that, whilst helpful and generally pragmatic, the Guidance is unlikely to be the ICO’s last word on AI: the ICO plans to launch a ‘toolkit’ to help organisations audit the compliance of their AI systems, to update the Guidance regularly and to review related guidance (such as the Cloud Computing Guidance).

The wider relevance of the Guidance
The Guidance concerns only those AI systems that process personal data. Nonetheless, the approaches and principles outlined in the Guidance are also likely to be relevant to many leading-edge software development and deployment projects that process personal data but do not engage AI or Machine Learning approaches.

The Guidance

The Guidance forms one of the three pillars of the ICO’s tripartite framework for auditing AI, which consists of: (i) the Guidance; (ii) auditing tools and procedures that the ICO will use in audits and investigations; and (iii) a forthcoming ‘toolkit’ that aims to provide further practical support to enable organisations to self-audit the compliance of their AI systems.

As well as complementing general ICO guidance, such as the guidance in relation to consent, the Guidance works alongside other ICO guidance particularly relevant to AI, including “Explaining Decisions made with AI”, published in 2020,2 and the “Big Data, AI and Machine Learning” report, first published by the ICO in 2014 and updated in 2017.

The corpus of ICO guidance
The Guidance should, naturally, be read in conjunction with other relevant ICO guidance and forms part of the corpus of relevant ICO materials including, for example, guidance on Consent, on Big Data, AI and Machine Learning and on Explaining Decisions made with AI.

The Guidance is aimed at two primary audiences: first, those with a compliance role (such as data protection officers and general counsel in corporations); and second, technology specialists (such as software developers and cyber security managers in industry).

Structure

Unsurprisingly, the structure of the Guidance corresponds with the data protection principles found in the General Data Protection Regulation (‘GDPR’) and the Data Protection Act 2018 (‘DPA’), both of which regulate the collection and use of information about identified or identifiable individuals (‘personal data’).

Part one of the Guidance concerns accountability and governance in AI, including data protection impact assessments (DPIAs); Part two concerns fair, lawful and transparent processing; Part three addresses data minimisation and security; and Part four addresses compliance with individual rights, including specific rights related to automated decision-making.  

Part 1: Accountability and governance

The principles of accountability and governance oblige data controllers to comply with and demonstrate compliance with the GDPR’s other key data protection principles (set out in GDPR Article 5(1)). 

The Guidance notes that demonstrating how the complexities of AI systems have been addressed is an important element of accountability. Delegating these issues to data scientists or engineering teams is insufficient: senior management and Data Protection Officers should be responsible for understanding and addressing compliance.

At several points the Guidance recognises that compliance efforts should be proportionate, for example:

“Your governance and risk management capabilities need to be proportionate to your use of AI. This is particularly true now while AI adoption is still in its initial stages, and the technology itself, as well as the associated laws, regulations, governance and risk management best practices are still developing quickly”. 

Part 1 of the Guidance has sections dealing with AI-specific implications of accountability, including:

  • justifying decisions to use AI systems;
  • assessing whether entities involved in the development and deployment of AI systems are controllers or processors (together with the responsibilities arising); 
  • undertaking data protection impact assessments in the context of AI systems;
  • assessing risks to the rights and freedoms of individuals, and how to address these risks.

Importantly, the Guidance outlines how Data Protection Impact Assessments (‘DPIAs’)3 will be required in the majority of AI use cases, with a requirement to consult the ICO being triggered prior to the start of processing where a DPIA indicates a residual high risk to individuals that cannot be sufficiently reduced.

It is worth recalling that Article 35(3)(a) of the GDPR requires a DPIA where use of AI involves:

  • systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are made that produce legal or similarly significant effects;
  • large-scale processing of special categories of personal data; or
  • systematic monitoring of publicly-accessible areas on a large scale.

The Guidance also notes that AI can involve operations that are inherently likely to result in a high risk, such as “…use of new technologies or novel application of existing technologies, data matching, invisible processing, and tracking of location or behaviour”.

Controller/Processor
The Guidance recognises that determining whether an organisation is a controller or a processor in relation to personal data can be complex.

The ICO’s separate guidance on controllers and processors may be helpful in making this decision.

Forthcoming ICO guidance on cloud computing is expected to address remaining questions around processor and controller identifications, which can be particularly challenging to make in the context of AI systems deployed in cloud computing scenarios.

Note also that DPIAs are required before the processing of personal data starts and that they are ‘living’ documents, which should be regularly reviewed. For example, ‘concept drift’ may occur when the demographics or behaviours of a target population change over time, affecting the statistical accuracy of an AI system.
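By way of illustration only, the following Python sketch shows the kind of ongoing monitoring that might detect concept drift in a deployed system. The baseline accuracy and tolerance figures are invented for the example and are not drawn from the Guidance.

import numpy as np

BASELINE_ACCURACY = 0.90   # accuracy measured at deployment (assumed figure)
TOLERANCE = 0.05           # acceptable degradation before review (assumed figure)

def check_for_drift(y_true: np.ndarray, y_pred: np.ndarray) -> bool:
    # Return True if batch accuracy has degraded enough to suggest drift.
    batch_accuracy = (y_true == y_pred).mean()
    drifted = batch_accuracy < BASELINE_ACCURACY - TOLERANCE
    if drifted:
        print(f"Possible concept drift: accuracy {batch_accuracy:.2f} "
              f"against baseline {BASELINE_ACCURACY:.2f} - review the DPIA.")
    return drifted

# Example batch of labelled production data where behaviour has shifted:
check_for_drift(np.array([1, 0, 1, 1, 0, 1]), np.array([0, 0, 1, 0, 0, 1]))

A check of this kind does not itself discharge the obligation to keep a DPIA under review, but it can provide the trigger for doing so.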

Controller / processor relationships in AI

Determining whether an organisation is a controller or a processor is complex. The ICO has provided separate detailed guidance on this topic,4 of which the key points are:

  • Controllers decide what personal data to process and for what purposes; 
  • Processors act on instructions in relation to processing personal data (although they may make technical decisions about how they process those data);
  • It is important to both assess and document the controller/processor status of each organisation involved in processing personal data.

In the context of AI systems, including those where processing happens in the cloud, assigning controller/processor relationships can be especially complex and the ICO plans separate guidance to address these complexities. Example scenarios where an entity could become a controller include where the entity takes decisions in relation to:

  • Model parameters (for example, how complex a decision tree can be, or how many models will be included in an ensemble);
  • Evaluation metrics and loss functions, such as the trade-off between false positives and false negatives;
  • Continuous testing and updating of models (including using what kinds of data).

As these few examples demonstrate, the situations in which organisations involved in developing and deploying AI could (perhaps inadvertently) become personal data controllers are legion.
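For the technology specialists among the Guidance’s intended audience, a short sketch may make those examples concrete. The following Python fragment (scikit-learn is assumed purely for illustration, and every parameter value is invented) shows decisions about ensemble size, tree complexity and the relative weighting of false positives and false negatives being taken in code:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import fbeta_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for training data.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(
    n_estimators=100,           # how many models the ensemble will include
    max_depth=5,                # how complex each decision tree can be
    class_weight={0: 1, 1: 5},  # treat false negatives as 5x costlier
    random_state=0,
).fit(X_train, y_train)

# Choice of evaluation metric: the F2 score favours recall over
# precision, i.e. it penalises false negatives more heavily.
print(fbeta_score(y_test, model.predict(X_test), beta=2))

Each of these lines embodies a decision about how personal data will be processed; depending on who takes such decisions, they may bear on whether that party is acting as a controller.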

Managing competing interests

The Guidance expressly acknowledges the tensions between the interests that exist in relation to AI systems, including:

  • Interests in training an accurate AI system (statistical accuracy) vs interests in reducing the volume of personal data required for system training (data minimisation);
  • Interests in achieving statistical accuracy, security and commercial confidentiality vs interests in AI systems being readily explicable and not opaque.

‘Trade-offs’ will be necessary and are a matter of judgment: “The right balance depends on the specific sectoral and social context you operate in, and the impact the processing may have on individuals.”

Outsourcing

As AI systems become more widely deployed, some commentators point to a change in focus from development of AI systems in-house, typically within major corporates, to businesses purchasing, licensing-in or leasing AI solutions from external suppliers.

Unsurprisingly, the Guidance recommends conducting an independent evaluation of trade-offs as part of a due diligence process, and that system requirements be specified at the procurement stage. Recital 78 of the GDPR is relevant in this respect, as it encourages those developing AI systems to take into account the right to data protection when developing and designing systems, and to seek to ensure that relevant controllers and processors are able to fulfil their data protection obligations.

Part 2: Fair, transparent and lawful

As many readers will be aware, the data protection principles of lawfulness, fairness and transparency can pose particular challenges for those developing and deploying AI systems.

By their nature, the processes of developing and deploying AI systems involve processing personal data in diverse ways for different purposes. Naturally, there should be a lawful basis for each particular purpose. The Guidance does not go into great detail in relation to the potential lawful bases of processing, although consent is considered briefly (and the reader is pointed to the ICO’s more detailed guidance specifically on consent, which should be genuinely freely given, specific, informed and unambiguous). It also notes that it seems unlikely that vital interests could justify processing of personal data for the purposes of training an AI system.

The Equality Act 2010 and AI
Entities developing and deploying AI systems will also need to consider carefully whether their systems are compliant with the EA2010. As the Guidance notes: “Demonstrating that an AI system is not unlawfully discriminatory under the EA2010 is a complex task, but it is separate and additional to your obligations relating to discrimination under data protection law. Compliance with one will not guarantee compliance with the other.”

The legitimate interests basis for processing is frequently relied on by AI developers and deployers, with the Guidance stressing the ‘three-part test’, namely the need to:

  • identify a legitimate interest (the ‘purpose test’);
  • show that the processing is necessary to achieve it (the ‘necessity test’); and
  • balance against the individual’s interests, rights and freedoms (the ‘balancing test’).

Fairness, in the context of AI, is important in ensuring the accuracy of the AI system, avoiding discrimination and taking account of the reasonable expectations of affected individuals. The Guidance contains a useful discussion of statistical accuracy, noting the trade-offs between precision and recall (or sensitivity), which can themselves be measured using statistical techniques, together with a useful outline of how to manage the risk of discrimination in an AI system, though it stops short of giving guidance on demonstrating that an AI system is not unlawfully discriminatory within the meaning of the Equality Act 2010 (‘EA2010’).
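To make the precision/recall trade-off concrete, here is a minimal Python sketch with invented data: as the decision threshold rises, precision improves while recall falls.

import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])  # ground-truth labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.45, 0.6, 0.2, 0.55, 0.9, 0.5])

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold:.1f}  "
          f"precision={precision_score(y_true, y_pred):.2f}  "
          f"recall={recall_score(y_true, y_pred):.2f}")

Where the balance should be struck is not a purely technical question: a medical screening tool and a marketing classifier may tolerate very different rates of false negatives.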

NIS incidents
Where the target organisation in a cyber attack is a relevant digital service provider, the attack may be a Network and Information Systems Regulations 2018 (‘NIS’) incident, even where an adversarial attack does not involve personal data.

The ICO is the competent authority for relevant digital service providers under the NIS; and the ICO’s guide to the NIS has further information.

One use of special category data is to assess discrimination in AI systems, such as where a dataset containing personal data of individuals with protected characteristics under the EA2010 is used to assess how the system performs in relation to each protected group. The Guidance outlines the need for a lawful basis under Article 6 of the GDPR, the need to meet one of the conditions in Article 9 of the GDPR, and the fact that some types of special category data require an additional basis or authorisation, found in Schedule 1 of the Data Protection Act 2018.
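By way of illustration, the following Python sketch shows the kind of per-group performance check described above. The groups, labels and predictions are invented; in practice, the protected-characteristic labels would themselves be special category data engaging Articles 6 and 9.

import numpy as np

groups = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])  # hypothetical protected groups
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])                  # actual outcomes
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])                  # system's decisions

for g in np.unique(groups):
    mask = groups == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    selection_rate = y_pred[mask].mean()  # share receiving the positive outcome
    print(f"group {g}: accuracy={accuracy:.2f}, selection rate={selection_rate:.2f}")

A marked disparity in accuracy or selection rate between groups would not itself establish unlawful discrimination, but it would plainly call for further investigation.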

Finally, AI systems can be notoriously difficult to understand and explain. The ICO has published specific guidance on explaining decisions made with AI, a topic I covered in an article for Computers & Law last month with my colleague, Terence Bergin QC.

GDPR Article 22
Article 22 of the GDPR contains additional rules designed to protect individuals where solely automated decision-making that has legal or similarly significant effects on them is undertaken. Solely automated decision-making is only permissible where it is:

  • necessary for the entry into or performance of a contract; or
  • authorised by law; or
  • based on the individual’s explicit consent.

Where processing does fall under Article 22, information should be given to individuals about the processing; the affected individuals should be able to request human intervention or to challenge a decision; and the AI deployer should carry out regular checks to make sure relevant AI systems are working as intended.

The second article in this series will be published in early November.

Notes and sources

1 Available for download here: https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

2 For an article discussing this guidance, see ExplAIning Artificial Intelligence, Computers and Law (August 2020); T Bergin QC and Q Tannock, available here: https://www.scl.org/articles/12007-explaining-artificial-intelligence

3 The ICO’s detailed guidance on DPIAs is available here: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/

4 Available here: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/controllers-and-processors/. Also, the CJEU has considered controller, joint controller and processor determinations in both Fashion ID (Case C-40/17) and ULD v Wirtschaftsakademie (Case C-210/16).


Quentin Tannock is a barrister at 4 Pump Court, a barristers’ chambers with expertise in areas including information technology, telecommunications and professional negligence. Quentin has a broad commercial practice with particular focus on commercial litigation in the areas of technology and IP.