ExplAIning Artificial Intelligence

July 27, 2020

Introduction

The Information Commissioner’s Office (‘ICO’) recently published Guidance “co-badged” with The Alan Turing Institute, arising out of “Project ExplAIn”. The purpose of that Guidance, titled “Explaining decisions made with AI”, is to give organisations practical advice to help them explain processes, services and decisions delivered or assisted by Artificial Intelligence (‘AI’) to the individuals affected by them.

The ICO has classified AI as one of its top three priorities. In its 2018-2021 Technology Strategy, the ICO explains that this is because of “the ability of AI to intrude into private life and effect human behaviour by manipulating personal data.” From the perspective of the regulator, being able to ‘explain AI’ in the context of personal data and privacy is particularly important, whilst, from the perspective of industry, using AI to influence personal behaviour is increasingly common.

This article will focus on the practical guidance to organisations contained in the Guidance. The Guidance acknowledges that there is no “one size fits all” approach but rather sets out a series of principles that are to be applied depending on context.

This article also outlines emerging trends in relation to “challenger” models identified in the Project ExplAIn research, together with the Guidance’s recommendations in relation to “black box” models “whose inner workings and rationale are opaque or inaccessible to human understanding”, such as neural networks and ensemble methods.

Artificial Intelligence

The Guidance defines Artificial Intelligence (‘AI’) as “…an umbrella term for a range of technologies and approaches that often attempt to mimic human thought to solve complex tasks. Things that humans have traditionally done by thinking and reasoning are increasingly being done by, or with the help of, AI.”


The Legal Framework

GDPR (from January 2021, the equivalent articles in the UK GDPR) and the DPA 2018 regulate the collection and use of “personal data” – information about identified or identifiable individuals. A common thread running through the DPA 1998 cases is that personal data is “personal” in that it concerns the identity or privacy of a living individual.

As noted in the Guidance, data protection law is technology neutral. It does not directly reference AI or any associated technologies such as machine learning. Data protection law does, however, contain specific references to large-scale automated processing of personal data (including profiling), which will include large numbers of AI deployments, such as when AI is used to process personal data and make a prediction (and any subsequent recommendation) in relation to a living individual (in data protection parlance, a ‘data subject’).

Widespread AI deployments

AI is now deployed in very many different contexts. The Guidance gives three examples, drawn from healthcare, policing and marketing, that show the breadth of application of AI.

An example of AI processing of personal data, one that might affect human behaviour and that many readers may have experienced, is an AI-enabled online film recommendation engine. Such an engine might make film recommendations that influence the individual’s film choices, with the effect that the recommended films are then purchased by that data subject.


The GDPR Articles of most relevance to AI identified and outlined in the Guidance are the:

  • Right to be informed: Articles 13 and 14 of the GDPR give individuals the right to be informed of: (i) the existence of solely automated decision-making producing legal or similarly significant effects; (ii) meaningful information about the logic involved; and (iii) the significance and envisaged consequences for the individual.
  • Right of access: Article 15 of the GDPR gives individuals the right of access to: (i) information on the existence of solely automated decision-making producing legal or similarly significant effects; (ii) meaningful information about the logic involved; and (iii) the significance and envisaged consequences for the individual.
  • Right to object: Article 21 of the GDPR gives individuals the right to object to processing of their personal data, specifically including profiling, in certain circumstances. There is an absolute right to object to profiling for direct marketing purposes.
  • Rights in relation to automated decision-making: Article 22 of the GDPR gives individuals the right not to be subject to a solely automated decision producing legal or similarly significant effects. There are some exceptions to this, and in those cases organisations are obliged to adopt suitable measures to safeguard individuals, including the rights to: (i) obtain human intervention; (ii) express their view; and (iii) contest the decision. (It is relevant to note that Recital 71, although non-binding, provides interpretative guidance on rights related to automated decision-making. It mainly relates to Article 22 rights, but also makes clear that individuals have the right to obtain an explanation of a solely automated decision after it has been made.)
  • DPIAs: Article 35 of the GDPR requires organisations to carry out Data Protection Impact Assessments (DPIAs) if their processing of personal data, particularly when using new technologies, is likely to result in a high risk to individuals. A DPIA is always required for any systematic and extensive profiling or other automated evaluation of personal data which are used for decisions that produce legal or similarly significant effects on people.

Where an AI-assisted decision is made by a process without any human involvement and it produces legal or similarly significant effects on an individual (something affecting an individual’s legal status/rights, or that has equivalent impact on an individual’s circumstances, behaviour or opportunities, for example a decision about welfare or a loan), GDPR requires organisations to: (i) be proactive in “…[giving individuals] meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Articles 13 and 14); (ii) “… [give individuals] at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.” (Article 22); and (iii) “… [give individuals] the right to obtain… meaningful information about the logic involved, as well as the significance and envisaged consequences…” (Article 15) “…[including] an explanation of the decision reached after such assessment…” (Recital 71).

Where an AI-assisted decision uses personal data but there is meaningful human involvement in the process, it is still subject to all the GDPR’s principles of which the principles of fairness, transparency and accountability are of particular relevance. In simple terms:

  • Fairness means that an organisation should only handle personal data in ways that people would reasonably expect and not use it in ways that have unjustified adverse effects on them. In the context of AI, it is likely that if an AI-assisted decision is taken in relation to a person without explanation or information about the decision being given, this might limit the person’s autonomy and, consequently, be unfair.
  • Transparent processing is about being clear, open and honest with people about who the organisation using personal data is, and how and why it uses their personal data.  A failure to provide any explanation in relation to how and why an AI-assisted decision is taken is likely in our view to mean that the decision will not be considered transparent within the meaning of the GDPR.
  • Accountability requires an organisation to take responsibility for what it does with personal data and how it complies with the other principles. The organisation must have appropriate measures and records in place to be able to demonstrate compliance. Achieving and demonstrating compliance with the principles outlined in GDPR Article 5, such as data-minimisation and accuracy, can be challenging in the context of the development and deployment of AI solutions. For example, there may be a tension between increased accuracy in an AI solution, achieved by processing additional volumes of personal data, and the principle of data-minimisation.

The DPA 2018 and GDPR are, naturally, not the only examples of statutory control of AI deployments in the United Kingdom. Another example is found in the Equality Act 2010. 

Equality Act 2010

The Equality Act 2010 prohibits behaviour that discriminates against, harasses or victimises a person on the basis of “protected characteristics”. Organisations deploying AI will need to take care that their doing so does not result in discrimination, harassment or victimisation within the meaning of the Equality Act 2010. One way in which an AI deployment might do so is if it perpetuates historical discriminatory practices.


Why ExplAIn AI?

While innovative and data-driven technologies such as AI create enormous opportunities, they also present some of the biggest risks related to the use of personal data. In order to be compliant with data protection law, organisations must design, deploy and use AI systems that recognise the data subject’s rights: (i) to be informed of solely automated processing; (ii) to object to processing, in particular profiling (with an absolute right to do so in relation to direct marketing); (iii) not to be subject to solely automated decision-making producing legal or similarly significant effects; and (iv) to access information about, and an explanation of, solely automated decisions after they have been made.

The touchstone is transparency, which is required by data protection law and which has the significant benefit of building the trust of regulators, consumers and other business stakeholders.

By considering at each stage of an AI enabled process or AI-assisted decision the obligation to be able to provide explicit, clear and meaningful explanations, organisations can identify and avoid or, at least, mitigate future issues and allegations of bias or discrimination. Such detailed consideration may also help an organisation improve its internal decision making and governance, informing the implementation of practical safeguards.  

The ExplAIn Guidance

The Guidance uses “AI” as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. The processes may be fully automated or may provide an output to be assessed by a human.

Not all AI involves personal data; where it does not, it falls outwith the Guidance. Where AI does process personal data the Guidance is engaged, and very often AI does use or create personal data (sometimes in vast quantities).

In recent years, machine learning (ML) models have emerged as a dominant AI technology. The aspects of AI comprising ML involve data processing to identify statistical patterns and correlations, frequently from large data sets. The Guidance focuses on “supervised learning”, the most widely used approach to ML. Supervised learning models are trained on a dataset which contains labelled data. “Learning” occurs in these models when numerous examples are used to train an algorithm to map input variables onto desired outputs.
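
By way of a hedged illustration of supervised learning as described above (this is our own sketch using the scikit-learn library and synthetic data; nothing in it is prescribed by the Guidance), the snippet below fits a simple classifier to labelled examples so that it learns to map input variables onto the desired output:

```python
# Minimal supervised-learning sketch (our illustration, not from the Guidance):
# a classifier is fitted on labelled examples so that it learns to map input
# variables (features) onto the desired output (the label).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic, labelled data standing in for e.g. historical lending decisions.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning": the model adjusts its parameters to reproduce the labels from
# the input variables in the training set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model then predicts outputs for new, unseen inputs.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```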

As outlined in the Guidance, the output of an AI model is generally one of three types: (i) a prediction; (ii) a recommendation; or (iii) a classification. The Guidance is largely concerned with “AI decisions” which encompass all three types of output and whether fully automated or involving human intervention.

Part 1 – explanation types

The Guidance identifies six main types of explanation, which can be broadly categorised as either “process” or “outcome” based explanations. Context is a key aspect of explaining decisions involving AI. Several factors about the decision, the person, the application, the type of data and the setting all affect what information an individual expects or finds useful. In any specific context these classifications may not prove to be apposite, but they are at least a useful starting point.

The Guidance outlines that the six main types of explanations are:

(1) A “Rationale explanation” involves explaining the reasons that led to a decision, which should be delivered in an accessible and non-technical way. A rationale explanation helps people understand the reasons that led to a decision outcome, in an accessible way. That explanation may inform their decision whether to challenge the decision or to change their behaviour.

(2) A “Responsibility explanation” involves explaining who is involved in the development, management and implementation of an AI system, and who to contact for a human review of a decision. A responsibility explanation will clarify the roles and functions across the organisation that are involved in the various stages of developing and implementing the AI system, including any human involvement in the decision-making.

(3) A “Data explanation” involves explaining what data has been used in a particular decision and how. A data explanation helps people understand what data about them, and what other sources of data, were used in a particular AI decision.

(4) A “Fairness explanation” involves explaining the steps taken across the design and implementation of an AI system to ensure that the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably. A fairness explanation helps people understand the steps the organisation took (and continues to take) to ensure its AI decisions are generally unbiased and equitable. It also gives people an understanding of whether or not they have been treated equitably themselves.

(5) A “Safety and performance explanation” involves explaining the steps taken across the design and implementation of an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours. A safety and performance explanation helps people understand the measures the organisation has put in place, and the steps the organisation has taken (and continues to take) to maximise the accuracy, reliability, security and robustness of the decisions the AI model helps it to make. 

(6) An “Impact explanation” involves explaining the steps taken across the design and implementation of an AI system to consider and monitor the impacts that the use of an AI system and its decisions has or may have on an individual, and on wider society. An impact explanation helps people understand how the organisation has considered the effects that its AI decision-support system may have on an individual. It is also about helping individuals to understand the broader societal effects that the use of the system may have. 

The Guidance notes that an organisation may be called upon to provide an explanation in more than one context. For example, the explanation may be needed by staff whose decisions are supported by the AI system and who need to relay meaningful information to an individual affected by the AI-assisted decisions. Equally an explanation may be sought by that very individual or by an external auditor or regulator. 

The transparency requirements of the GDPR (at least in cases of relevant solely automated AI decisions) encompass (i) an organisation providing meaningful information about the logic, significance and envisaged consequences of the AI decision; (ii) the right to object; and (iii) the right to obtain human intervention.

The Guidance draws a distinction, relevant to each of the explanation types, between:

(1) Process-based explanations of AI systems, which involve demonstrating compliance with good governance processes and best practices throughout the design and use of the system.

(2) Outcome-based explanations of AI systems, which involve clarifying the results of a specific decision. They involve explaining the reasoning behind a particular algorithmically-generated outcome in plain, easily understandable, and everyday language.   

Part 2 – explaining AI in practice

The Guidance is task-based and intended to assist organisations in the design and deployment of appropriately explainable AI systems and in providing clarification of the results these systems produce to a range of affected individuals. However, the tasks are themselves dependent on the explanation types, set out above.

  • The first task is to select priority explanations by considering the domain, use case and impact on the individual, to separate out the different aspects of an AI-assisted decision that people may want explained.
  • The second task is to collect and pre-process data in an explanation-aware manner, with a view to ensuring a high quality explanation.
  • The third task is to build the system to ensure the organisation is able to extract relevant information for a range of explanation types. The model chosen should be at the right level of interpretability for the use case and for the impact it will have on the decision recipient.
  • The fourth task is to translate the rationale of the system’s results into useable and easily understandable reasons, to determine how to convey the model’s statistical results to users and decision recipients as understandable reasons.
  • The fifth task is to prepare implementers to deploy the AI system. Human decision-makers who are meaningfully involved in an AI-assisted outcome must be appropriately trained and prepared to use the model’s results responsibly and fairly. Training should include conveying basic knowledge about the nature of machine learning, and about the limitations of AI and automated decision-support technologies.
  • The sixth task is to consider how to build and present the explanation, whether through a website or app, in writing or in person.

Supplementary models

‘Supplementary models’ are models that provide supplementary explanations (or enable supplementary explanation strategies) for black box models; in short, they are aids to the interpretability of black box models.

The Guidance provides a useful, although brief, summary of ‘supplementary models’ in its Annex 3.

An example type of supplementary tool is the so-called “surrogate model” (SM), which builds a simplified proxy of a more complex black box model. Being simplified proxies, SMs will often fail to make non-linear and multi-layer interactions within a black box model explicit.
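
As a hedged sketch of the surrogate-model idea (our own illustration using scikit-learn, not drawn from the Guidance), the snippet below fits a shallow decision tree to the predictions of a more complex random forest, producing a simplified, inspectable proxy of the black box:

```python
# Global surrogate model sketch (our illustration): a shallow, interpretable
# decision tree is fitted to the *predictions* of a black-box random forest,
# giving a simplified proxy of the complex model's behaviour.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# The complex "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's own outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how closely the simple proxy tracks the black box on this data.
print("fidelity:", surrogate.score(X, black_box.predict(X)))

# The surrogate's rules can be printed in human-readable form; as noted above,
# such a proxy may miss non-linear or multi-layer interactions in the black box.
print(export_text(surrogate))
```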

Visualisation tools (such as Partial Dependence Plots and Accumulated Local Effects Plots) represent further examples of such supplementary models.
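
By way of a further hedged illustration (again our own example in scikit-learn, rather than anything prescribed by the Guidance), a partial dependence computation of the kind underlying such plots estimates how a model’s average prediction changes as a single input feature is varied:

```python
# Partial dependence sketch (our illustration): estimate how the model's average
# prediction changes as one input feature is varied, averaging over the others.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the prediction on feature 0 (in recent scikit-learn
# versions this returns a Bunch whose "average" entry holds the averaged
# predictions over the grid of values tried for that feature).
pd_result = partial_dependence(model, X, features=[0])
print(pd_result["average"])
```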


‘Black box’ models

The Guidance describes a ‘black box’ AI system as “an AI system whose inner workings and rationale are opaque or inaccessible to human understanding”.  Frequently, such AI models are the most effective but they can, naturally, be the most difficult to explain.  

An example black box AI system is one based on an artificial neural net (ANN), which builds functions to predict and/or classify data through trained, interconnected and ‘layered’ operations. Further examples include ensemble methods (such as the ‘random forest’ method of supporting overall predictions using aggregated results from several models) and support vector machines.
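
As a rough, hedged illustration of why such models resist direct explanation (our own sketch, not part of the Guidance), the snippet below shows that even a small neural network’s learned “knowledge” is spread across layered weight matrices, and a random forest’s across hundreds of trees, neither of which maps readily onto a human-readable rationale:

```python
# Rough sketch (our illustration) of why these models are hard to explain directly:
# an ANN's learned parameters are layered weight matrices, and a random forest's
# prediction is an aggregate over many individual trees.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The ANN's "rationale" exists only as arrays of learned weights...
print("ANN weight matrix shapes:", [w.shape for w in ann.coefs_])

# ...and the forest's as the aggregated votes of hundreds of trees.
print("trees in the forest:", len(forest.estimators_))
```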

The Guidance suggests that, during the model selection phase, the risks of using a black box model should be identified and “evidence of the use case and organisational capacities and resources to support the responsible design and implementation of these systems” should be retained.

The Guidance, also, suggests the use of supplementary tools to explain any black box models that are deployed, with documentation being created in advance to explain how supplementary tools do so and to demonstrate “how the use of the tool will help you to provide meaningful information about the rationale of any given outcome.” When presenting information derived from such supplementary tools, indicators of the limitations and uncertainty of the tools should also be provided.  

‘Challenger’ models

One finding of Project ExplAIn’s research was that many organisations in highly regulated sectors, such as banking and insurance, that currently use AI do so using relatively transparent and explicable AI models and are starting to do so in combination with so called “challenger” models that may be less transparent and explicable.  

The Guidance discusses this trend under the topic “hybrid models” stating that “when you select an interpretable model to ensure explainable data processing, you should only carry out parallel use of opaque ‘challenger’ models for purposes of feature engineering/selection, insight, or comparison if you do so in a transparent, responsible, and lawful manner.” 

In the context of explanations, challenger models can be used for ‘feature selection’ in the sense of reducing the number of variables or ‘feature engineering’ to combine variables. When used in this way, challenger models can simplify and thus enhance the interpretability (and, possibly, explainability) of a production model. The Guidance records that “If you use challenger models for this purpose, you should make the process explicit and document it. Moreover, any highly engineered features that are drawn from challenger models and used in production models must be properly justified and annotated in the metadata to indicate what attribute the combined feature represents and how such an attribute might be a factor in evidence-based reasoning.” 
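
A hedged sketch of that feature-selection use of a challenger model follows (our own illustration in scikit-learn, with hypothetical placeholder feature names; nothing here is prescribed by the Guidance): an opaque challenger ranks candidate variables, only the top-ranked ones are carried into a more interpretable production model, and the selection itself is recorded so that it can be documented as the Guidance recommends:

```python
# Sketch (our illustration, hypothetical feature names) of using an opaque
# "challenger" model for feature selection: the challenger ranks candidate
# variables and only the top-ranked ones feed an interpretable production model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

feature_names = [f"feature_{i}" for i in range(8)]  # placeholders, not real attributes
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)

# Opaque challenger used only for insight into which variables matter.
challenger = GradientBoostingClassifier(random_state=0).fit(X, y)
ranked = np.argsort(challenger.feature_importances_)[::-1]

# Record the selection (e.g. in model metadata / DPIA documentation).
selected = ranked[:3]
print("challenger-selected features:", [feature_names[i] for i in selected])

# Interpretable production model trained only on the selected variables.
production = LogisticRegression().fit(X[:, selected], y)
print("production coefficients:",
      dict(zip([feature_names[i] for i in selected], production.coef_[0].round(3))))
```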

Naturally, if an organisation uses a challenger model for processing personal data and making an actual decision that affects a data subject, then the same data protection principles and requirements for explanation will apply to the challenger model as to the current champion. The Guidance also suggests that when an organisation uses challenger models alongside more interpretable models, the purpose of the challenger models and how they will be used should be explicit, and that organisations “…should treat them as core production models, document them, and hold them to the same explainability standards, if you incorporate the insights from this challenger model’s processing into any dimension of actual decision-making.”

Part 3 – executive summary: what this means 

Part 3 of the Guidance is aimed at senior executives (rather than, for example, software engineers) and outlines the organisational roles involved in providing explanations to ‘decision recipients’, together with the policies, procedures and documentation that can help ensure an organisation is able to provide appropriate explanations.

In simple terms, the Guidance anticipates that senior management have overall responsibility for ensuring appropriate explanations are given, whilst compliance teams seek to ensure that AI development and deployment meets internal policies and external regulatory requirements, based on appropriate information and assurances from AI product managers.  

This part of the Guidance notes the important function of the AI development team and that this team might be external, where the AI solution is purchased from a third party, stressing that human implementers should be properly trained and supported (including by any third party external supplier). Importantly, where AI solutions (or significant parts of such solutions) are bought or licensed-in, the Guidance is clear that the deploying organisation (the purchaser or licensee) as the data controller “[has] the primary responsibility for ensuring that the AI system you use is capable of producing an appropriate explanation for the decision recipient”.

Conclusion

In our view, AI is a dynamic and fascinating area of endeavour, in which many lawful deployments are likely to have significant commercial value. In this context, we think that the pragmatic and clear approach in the Guidance, including its focus on outlining principles, identifying decision makers and assessing business processes is to be welcomed.  

The use of AI to process personal data can be technically complex and may engage complex legal rules, many of which are relatively new. The relatively rapid development of AI technologies and increasing deployments across a very wide range of industry areas add to the challenges in advising AI developers and deployers.

As might be expected in such a dynamic and commercially relevant area, the corpus of UK and European court decisions and guidance is expanding relatively rapidly. Of relevance to explaining AI-enabled decisions alone, in addition to Project ExplAIn, the past and future work of the European Commission’s 52-member High-Level Expert Group on AI, the European Data Protection Board (EDPB) and the UK’s Centre for Data Ethics and Innovation (CDEI) (part of the UK Government Department for Digital, Culture, Media and Sport), among others, may all be engaged. Whilst much of this expansion is helpful and relevant (such as the Guidance outlined in this article), keeping up to date can be challenging for those who work and provide legal and strategic advice in this space.

One specific challenge for organisations in providing explanations of AI-enabled decisions may arise from the tension between commercial confidentiality and the regulatory requirements, namely achieving appropriate explanations whilst avoiding the public disclosure of information that might breach confidentiality undertakings to others (such as suppliers or business partners), or that might place information in the public domain that (whilst not the subject of confidentiality undertakings) is commercially sensitive. In our experience, this is an area where specialist legal advice may be useful.

In conclusion, we think that the Guidance is helpful in equipping organisations to shine a light into this particular “pea-souper”, although doubtless many challenges remain with respect to explaining AI (and in other respects) both for those organisations that deploy AI and for their advisors.

Terence Bergin QC and Quentin Tannock are Barristers at 4 Pump Court, a barristers’ chambers with expertise in areas including information technology, telecommunications and professional negligence.
