Explaining AI decision making: ICO and the Alan Turing Institute consult on joint guidance

December 2, 2019

The ICO and the Alan Turing Institute have launched a consultation on their joint guidance on how to explain decisions made with AI. The guidance aims to give organisations practical advice to help them explain, to the individuals affected, the processes, services and decisions delivered or assisted by AI.

Increasingly, organisations are using artificial intelligence to support, or to make, decisions about individuals. The ICO and the Alan Turing Institute aim to ensure that the guidance has practical application in the real world, so that organisations can easily use it when developing AI systems, and they are seeking feedback in this regard. AI is a key area of focus for the ICO.

The guidance is not a legally binding statutory code of practice under the Data Protection Act 2018; rather, it aims to set out good practice for explaining to individuals decisions that have been made using AI systems processing personal data. It clarifies how the data protection provisions relevant to explaining AI decisions apply, and highlights other relevant legal regimes outside the ICO’s remit.

The guidance consists of three parts:

Part 1 explains the basics of AI. It defines the key concepts and outlines a number of different types of explanation. It is relevant for all members of staff involved in the development of AI systems. It highlights that one of the key differences between a decision made by an AI system and one made without AI lies in who the affected individual can hold accountable for the decision made about them. When a decision is made directly by a human, it is clear who the individual can go to for an explanation of why that decision was made. Where an AI system is involved, however, responsibility for the decision can be less clear. Individuals should not lose the ability to hold someone accountable simply because a decision is made with the help of, or by, an AI system rather than solely by a human. Where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for the AI system.

To ensure that the decisions made using AI are explainable, organisations should follow four principles:

  • be transparent
  • be accountable
  • consider the context you are operating in
  • reflect on the impact of your AI system on the individuals affected, as well as on wider society

The guidance identifies six “explanation types” for AI as well as the risks and benefits of explaining, or not explaining, AI decisions.

Part 2 covers how to explain AI in practice and aims to help organisations with the practicalities of providing explanations to individuals. It will primarily be helpful for technical teams, as well as an organisation’s DPO and/or compliance team. It sets out the steps organisations can take to explain AI-assisted decisions to individuals, starting with how to choose the explanation types most relevant to a particular use case and what information should be assembled for each. For most explanation types, this information can be derived from organisational governance decisions and documentation. However, given the central importance of the AI system’s underlying logic to AI-assisted explanations, the guidance gives technical teams a comprehensive guide to choosing appropriately interpretable models, depending on the use case, and indicates how supplementary tools can be used to extract elements of the model’s workings from ‘black box’ systems. Finally, the guidance shows how to deliver explanations, containing the chosen explanation types, in the way most useful to the decision recipient.
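The guidance itself is tool-agnostic, but as a purely illustrative sketch of the kind of supplementary tool Part 2 refers to, the snippet below applies permutation feature importance to an otherwise opaque model. The dataset, model and scikit-learn functions used here are assumptions made for this example, not part of the guidance.

    # Illustrative sketch only: one way to extract a global, post-hoc view of a
    # "black box" model's workings. Dataset, model and tooling are assumptions
    # chosen for this example, not taken from the ICO/Turing guidance.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )

    # A model whose internal logic is not directly readable by a decision recipient.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Permutation importance measures how much held-out accuracy drops when each
    # feature is shuffled, giving a ranked, human-readable summary of which
    # inputs most influence the model's decisions.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[idx]}: "
              f"{result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")

A ranked list of influential inputs like this is only one ingredient of an explanation; as the guidance notes, it still needs to be delivered in a form that is meaningful to the decision recipient.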

Part 3 deals with what explaining AI means for organisations. It covers the various roles, policies, procedures and documentation that organisations can put in place to ensure that they are able to provide meaningful explanations to affected individuals. It is primarily targeted at senior management teams, but DPOs and compliance teams will also find it useful.

The final version of the guidance, taking the consultation feedback into account, will be published in 2020 and will be kept under review.