Fully automated AI decision-making systems: some thoughts from the ICO

August 8, 2019

The full post can be viewed here but is summarised below.

———————–

The GDPR requires that organisations implement suitable safeguards when processing personal data to make solely automated decisions that have a legal or similarly significant impact on individuals. These safeguards include the right for data subjects to:

  • obtain human intervention
  • express their point of view
  • contest the decision made about them.

Such safeguards cannot be token gestures. Guidance published by the European Data Protection Board states that human intervention should involve a review of the decision, which “must be carried out by someone who has the appropriate authority and capability to change the decision”.  The review should include a “thorough assessment of all the relevant data, including any additional information provided by the data subject.”

The type and complexity of the systems involved in making solely automated decisions will affect the nature and severity of the risks to people’s data protection rights, as well as the compliance and risk-management challenges organisations face.

Basic systems, which automate a relatively small number of explicitly written rules, are unlikely to be considered AI. Because such systems are highly interpretable, it should also be relatively easy for a human reviewer to identify and rectify any mistake if a decision is challenged by a data subject.
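To make the contrast concrete, here is a purely illustrative sketch of such a rule-based system. The eligibility rules and thresholds are hypothetical; the point is that every outcome can be traced back to a named rule that a reviewer can check and, if necessary, correct.

```python
# Purely illustrative: a "basic system" whose logic is a handful of explicitly
# written rules. A human reviewer can trace exactly which rule produced the
# outcome, which makes mistakes easy to identify and rectify.

def assess_application(age: int, annual_income: float, existing_debt: float) -> tuple[str, str]:
    """Return a (decision, reason) pair for a hypothetical application."""
    if age < 18:
        return "reject", "applicant is under 18"
    if annual_income < 15_000:
        return "reject", "annual income below the 15,000 threshold"
    if existing_debt > 0.5 * annual_income:
        return "reject", "existing debt exceeds 50% of annual income"
    return "approve", "all rules satisfied"

decision, reason = assess_application(age=34, annual_income=28_000, existing_debt=9_000)
print(decision, "-", reason)  # approve - all rules satisfied
```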

However more complex systems, such as those based on machine learning, present greater challenges for meaningful human review. Machine learning systems make predictions or classifications about people based on data patterns. Even when they are highly accurate, they will occasionally reach the wrong decision in an individual case and these errors may not be easy for a human reviewer to identify, understand or fix.

While not every challenge on the part of a data subject will be valid, organisations should expect that many could be, and that the assumptions built into the AI system’s design will be challenged.

Accordingly, organisations should: 

  • consider, from the design phase, the system requirements necessary to support meaningful human review, in particular interpretability and effective user-interface design for human reviews and interventions (a brief sketch of what this might look like follows this list)
  • design and deliver appropriate training and support for human reviewers
  • give staff the appropriate authority, incentives and support to address or escalate data subjects’ concerns and, if necessary, override the AI system’s decision.
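As a concrete illustration of the first bullet, the sketch below shows one way a system could surface the factors behind each automated decision to a reviewer. It assumes a simple linear scoring model with hypothetical feature names, weights and threshold; real systems and explanation methods will differ, but the principle is the same: give the reviewer something intelligible to assess.

```python
# A minimal sketch (not the ICO's prescription) of surfacing interpretability
# information to a human reviewer. Assumes a simple linear scoring model; the
# feature names, weights and threshold are hypothetical.

WEIGHTS = {"income": 0.4, "years_at_address": 0.15, "missed_payments": -0.6}
THRESHOLD = 0.5

def decide_with_explanation(features: dict[str, float]) -> dict:
    """Score an application and return the decision with per-feature contributions."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "reject",
        "score": round(score, 3),
        # Sorted by absolute impact, so the reviewer sees the main drivers first.
        "top_factors": sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True),
    }

print(decide_with_explanation({"income": 2.1, "years_at_address": 0.5, "missed_payments": 1.0}))
```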

Importantly, the use of solely automated decision-making systems will always trigger the need for a data protection impact assessment. Impact assessments are not just a compliance requirement; they can help organisations reflect on whether deploying a solely automated process is appropriate in the first place. The assessment should consider carefully both the complexity and interpretability of the system and the organisation’s ability to adequately protect individuals’ rights. If a system is so complex that its decisions cannot be explained, it may prove difficult to contest them or to carry out a meaningful review. Information about the logic of a system and how decisions are made should give data subjects the necessary context to decide whether, and on what grounds, they would like to request human intervention.

The review process should be simple and user-friendly. Organisations are expected to keep a record of the following (one possible structure for such a record is sketched after the list):

  • all decisions made by an AI system
  • whether someone has requested human intervention, expressed their views, or contested the decision
  • whether the decision was changed as a result.
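As an illustration only, such a record might be structured along the following lines; the field names are hypothetical and would need to be adapted to the organisation’s own systems.

```python
# Illustrative only: one possible structure for the record described above.
# Field names are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    decision_id: str
    decision: str                              # outcome produced by the AI system
    made_at: datetime
    human_intervention_requested: bool = False
    data_subject_views: Optional[str] = None   # any views expressed by the data subject
    decision_contested: bool = False
    decision_changed_on_review: bool = False   # whether the outcome was overturned

record = AutomatedDecisionRecord(
    decision_id="2019-0001",
    decision="reject",
    made_at=datetime.now(timezone.utc),
    human_intervention_requested=True,
    decision_contested=True,
    decision_changed_on_review=True,
)
```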

Organisations should monitor and analyse this data. If decisions are regularly changed in response to challenges, the system should be amended. Again, this is not just about compliance: it is an opportunity to improve the performance of AI systems and, in turn, to build trust in them. Of course, if grave or frequent mistakes are identified, organisations will need to take immediate steps to understand and rectify the underlying issues and, if necessary, suspend use of the automated system.
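Purely as a sketch, the monitoring step could be as simple as computing the rate at which contested decisions are overturned on review and flagging the system for amendment when that rate exceeds a threshold the organisation has chosen; the records and threshold below are hypothetical.

```python
# Hypothetical monitoring sketch: if contested decisions are regularly
# overturned on review, flag the system for amendment.

def override_rate(records: list[dict]) -> float:
    """Share of contested decisions that were changed after human review."""
    contested = [r for r in records if r["decision_contested"]]
    if not contested:
        return 0.0
    return sum(r["decision_changed_on_review"] for r in contested) / len(contested)

REVIEW_THRESHOLD = 0.10  # hypothetical: investigate if more than 10% of challenges succeed

logged = [
    {"decision_contested": True, "decision_changed_on_review": True},
    {"decision_contested": True, "decision_changed_on_review": False},
    {"decision_contested": False, "decision_changed_on_review": False},
]

rate = override_rate(logged)
if rate > REVIEW_THRESHOLD:
    print(f"Override rate {rate:.0%}: review the system's assumptions and performance.")
```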

To hear a discussion of some of the issues touched on in this post, and a wider view of the AI Auditing Framework project, listen to our podcast recorded with the team and published here.