Negating Schrödinger’s Justice through AI Transparency

In this opinion piece, Ben Taylor explains why he believes that building transparency into AI is critical to the future of automated decision-making platforms in the criminal justice system

William Perrin, former Cabinet Office civil servant and transparency campaigner, coined the phrase ‘Schrödinger’s justice’ to describe, in part, the risks involved in adopting automated decision-making and AI in the criminal justice system. The negative implication is that large parts of court proceedings and decision-making will happen in a virtual black box, away from any kind of scrutiny.

While I wholeheartedly respect the sanctity of our legal system and believe it is absolutely necessary (in such an important area of government) to ensure that new technologies are transparent and unbiased, I can’t help thinking that the furore surrounding Schrödinger’s justice is just another example of the sensationalism, scaremongering and unhealthy ignorance that surround AI.

Ignorance breeds fear and mistrust. An example is research by the Royal Society of Arts (RSA) and YouGov earlier this year, which found that 83% of the British public are unfamiliar with the use of automated decision-making in the criminal justice system. Despite this self-professed lack of familiarity, 60% said they oppose its use.

July 2018 saw the first public evidence-gathering hearing of the Law Society’s Technology and Law Policy Commission on ‘Algorithms in the Justice System’. Over the next six months, the commission aims to examine the use of algorithms in the justice system in England and Wales, and what controls, if any, are needed to protect human rights and maintain trust in it.

I had high hopes for the hearing but was disappointed to read that the main recommendation from the first session seemed to be that the Government should set up an independent public register of artificial intelligence systems to ensure that automated decision-making by the police, courts and other justice agencies is open to public scrutiny.

For me, that’s not really dealing with the issue!

There are already a few examples of police forces using algorithms. Probably the most widely publicised is Durham Constabulary’s Harm Assessment Risk Tool (HART) pilot, which was created to assist custody officers in deciding whether a suspect should be released, kept in custody, or made eligible for the local rehabilitation programme ‘Checkpoint’.

Durham’s algorithm is a black box, so the system cannot fully explain how it reaches its decisions. As a result, HART has faced its fair share of negative publicity, including accusations that it discriminates against the poor.

Michael Barton, Chief Constable of Durham Constabulary, took part in the recent ‘Algorithms in the Justice System’ hearing to defend HART and reassure everyone that it is intended as a decision-support tool and would never take the kind of nuanced decisions made by custody officers.

In the current AI climate, any black box automated decision-making platform that makes it difficult, or impossible, to understand exactly how a decision was reached is asking for trouble. The House of Lords Select Committee on AI has already expressed its view that ‘…it is unacceptable to deploy any AI system that could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.’

Transparency has become a technical issue that developers must get right. I’ve written extensively about how developers can build transparency into AI systems and about the issues stemming from machine learning and statistical methods of data analysis. Because machine learning techniques are designed to learn answers rather than have them explicitly programmed, most are, by their nature, black boxes.

As an example, take an image classification task using the popular deep learning technique of convolutional neural networks (CNNs). A CNN relies on a large network of weighted nodes, but when it classifies an image it is extremely difficult to understand which features were extracted to make that classification. Consequently, we have little idea what the individual nodes of a CNN actually represent.
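To make this concrete, here is a minimal sketch in Python using PyTorch (an illustrative choice; nothing in it reflects any system actually used in the justice system). The architecture, dummy input and class count are all hypothetical. The point is simply that the only ‘explanation’ the raw model offers for its prediction is a block of numeric feature maps.

```python
# A minimal, hypothetical sketch of why a CNN's internals are hard to interpret:
# the learned feature maps are tensors of numbers with no obvious human meaning.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolutional blocks followed by a linear classifier.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        feats = self.features(x)                  # opaque intermediate feature maps
        return self.classifier(feats.flatten(1)), feats

model = TinyCNN()
image = torch.rand(1, 3, 32, 32)                  # a dummy 32x32 RGB image
logits, feature_maps = model(image)

print("Predicted class:", logits.argmax(dim=1).item())
# The only 'explanation' available from the raw model is this block of numbers:
print("Feature map shape:", feature_maps.shape)   # torch.Size([1, 32, 8, 8])
print("Sample activations:", feature_maps[0, 0, 0, :4])
```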

‘Explainer algorithms’ that can be applied alongside other statistical techniques are one way of tackling this issue. Techniques such as Random Decision Forests (RDF) also open the door to better understanding of feature extraction.
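As a rough illustration of the contrast, the sketch below uses scikit-learn’s random forest implementation on entirely synthetic data with made-up feature names (they are not drawn from HART or any real justice dataset). Unlike the CNN above, the forest exposes per-feature importance scores that a human reviewer can at least inspect and question.

```python
# A minimal sketch of a Random Decision Forest surfacing which input features
# drove its predictions. Data and feature names are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["prior_offences", "age", "time_since_last_offence", "postcode_band"]

# Synthetic training data: the label depends mostly on the first two features.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Unlike the raw CNN, the forest reports a per-feature importance score
# that a human reviewer can inspect and challenge.
for name, importance in sorted(zip(feature_names, forest.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```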

Ultimately, I believe what we really need are more technologies that clearly model automated decision-making on human expertise rather than on black box data. AI is designed for collaboration with people. Building human expertise into the development of these platforms, and enabling AI systems to explain their decisions in a format those same experts can understand and confirm, is critical to the future of automated decision-making in the criminal justice system and to negating the risks of Schrödinger’s justice.
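By way of illustration only (this is not Rainbird’s platform or any real custody tool, and every rule, field and threshold below is hypothetical), the sketch shows what an expert-authored, rule-based recommendation with a readable explanation trace might look like:

```python
# A hypothetical sketch of decision support modelled on expert-authored rules,
# where every recommendation carries a human-readable explanation trace.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    outcome: str
    reasons: list = field(default_factory=list)

def assess(suspect: dict) -> Recommendation:
    reasons = []
    if suspect["offence_severity"] == "high":
        reasons.append("Offence severity is high (expert rule R1).")
        return Recommendation("detain", reasons)
    if suspect["prior_offences"] <= 2 and suspect["local_resident"]:
        reasons.append("Two or fewer prior offences (expert rule R2).")
        reasons.append("Lives locally, so eligible for rehabilitation programme (expert rule R3).")
        return Recommendation("refer to rehabilitation programme", reasons)
    reasons.append("No release or referral rule matched; defer to custody officer (default rule R4).")
    return Recommendation("refer to custody officer", reasons)

result = assess({"offence_severity": "low", "prior_offences": 1, "local_resident": True})
print("Recommendation:", result.outcome)
for reason in result.reasons:
    print(" -", reason)
```

Because each recommendation carries the rules that fired, a custody officer can confirm or challenge the reasoning rather than take an opaque score on trust.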

Technological breakthroughs will always be polarising because there are often both benefits and risks. But the ultimate success of AI and automated decision-making hinges on the uphill battle to build public trust. Building more explainable and transparent AI systems is the best way to win that battle.

Ben Taylor is CEO at Rainbird and advisor to the All-Party Parliamentary Group on Artificial Intelligence (APPG AI)

Published: 30 September 2018
