Justice and Home Affairs Committee publishes report on new technologies in the justice system

March 31, 2022

The House of Lords Justice and Home Affairs Select Committee has published a report on new technologies in the justice system. The report says that AI technologies can have serious implications for a person’s human rights and civil liberties. The Committee says that it was taken aback by the proliferation of AI tools being used without proper oversight. As a result, it has made a series of recommendations.

Legal and institutional frameworks

The Committee sees serious risks that an individual’s right to a fair trial could be undermined by algorithmically manipulated evidence. Therefore, it favours precise documentation, evaluation by subject experts, and transparency where evidence may be subject to algorithmic manipulation. 

The Committee warns that the use of advanced technologies in the application of the law poses a real and current risk to human rights and to the rule of law.

The Committee recommends that the government rationalise the respective roles of departments regarding the use of new technologies in the application of the law. It also recommends that the government conduct a review to rationalise and consolidate the governance structures for the use of these technologies. The government should establish a single national body, independent and properly funded, to govern the use of new technologies in the application of the law.

A stronger legal framework is required to prevent damage to the rule of law. The Committee therefore recommends primary legislation that embodies general principles, supported by detailed regulations setting minimum standards. The government has already endorsed a set of AI principles; in its response to this report, it should outline proposals to establish these firmly in statute.

Guidance, both general and specific, is urgently needed. The government should require that national guidance on the use of advanced technological tools in policing and criminal justice be drawn up and, as part of its response to this report, should outline concrete plans to that end. There is also a need for a ‘one-stop shop’ collating all relevant legislation, regulation and guidance, and drawing together high-level principles with practical user guides.

There is no clear line of accountability for the misuse or failure of technological solutions used in the application of the law. As a result, no satisfactory recourse mechanisms exist. The government should appoint a taskforce to produce guidance to ensure that lines of accountability, which may differ depending on circumstances, are consistent across England and Wales. The taskforce should act transparently and consult with all affected parties. 

In its response to the report, the government should set out the circumstances in which it would be willing to impose moratoriums in the future. The new national body should be empowered to refuse certification for a new tool under those circumstances.

Transparency

According to the Committee, one of the principles in the new statute should be transparency.

There are no systematic obligations on individual departments, public bodies, and police forces to disclose information on their use of advanced technological solutions. As a result, these solutions cannot be scrutinised or challenged, which risks undermining trust in the police, the justice system, and the rule of law. The Committee urges the government to consider what level of openness it would be appropriate to require of police forces regarding their use of advanced technologies.

It says that full participation in the Algorithmic Transparency Standard collection should become mandatory and should cover all advanced algorithms used in the application of the law that have direct or indirect implications for individuals. There should be penalties where entries are not completed, and the collection itself should be user-friendly.

Human-technology interactions

The Committee says that there is evidence that users of advanced technologies are in many cases failing to engage meaningfully with the output of automated processes. Outputs may be overrated or misinterpreted, and challenge smothered, with potentially significant adverse consequences for individuals and litigants.

The Home Office should, in conjunction with the Ministry of Justice and the College of Policing, undertake or commission appropriate research on how the use of predictive algorithms affects decision making, and under what circumstances meaningful human interaction is most likely. 

The Committee endorses the ICO’s principles about meaningful interaction with technologies. These principles should be applied through mandatory training for officers and officials using advanced technologies as well as those working in the administration of justice. Institutional processes to enable challenge to algorithmic outcomes should be reviewed and inspected. These inspections should also assess whether the users of the relevant tool(s) are appropriately trained.

There should be a requirement on producers of technological products to embed explainability within the tools themselves. Tool interfaces should be designed to support users, equipping them with the information needed to interpret outputs and an indication of the level of certainty those outputs provide. The specifics of what should be explained will vary depending on the context; tools should reflect that variation and encourage users to consider and challenge results.

Evaluation and oversight

Comprehensive impact assessments should be made mandatory on each occasion an advanced technological tool is implemented in a new context or for a new purpose. They should include consideration of bias; the weaknesses of the specific technology and its associated datasets; and a discursive consideration of the wider societal and equality impacts (including explanations of public consultations). Impact assessments should be regularly updated and open to public scrutiny.

Minimum scientific standards should be set centrally by the new national body. They should then be transposed into regulations through secondary legislation. Pre-deployment scientific evaluations of technological solutions designed for use in the application of the law would empower public bodies and agencies to use better and more effective tools more safely. 

Individual police forces are ill-equipped to carry out systematic evaluation of technological solutions: they have neither the resources nor the expertise. Tools are being deployed without sufficient evaluation, risking the use of tools that either cannot do the job or that have unacceptable impacts on society.

The new national body should systematically certify technological solutions following evaluation and prior to their deployment. No technological solution should be deployed until the central body has confirmed that it meets the minimum standards. After a transition period, this requirement should apply retrospectively to technological solutions already in use. Police forces should still be able to procure technological solutions based on local needs, but they need extra support to become proficient customers of new technologies. Enhanced procurement guidelines are also needed.

The government should continue work on the national data ethics governance body. It will need the independence, resources, and statutory underpinning to enable it to scrutinise the deployment of new technologies and act as a central resource of best practice. The Home Office should also encourage and facilitate the development of local or regional specialist ethics committees.  

Conclusion and next steps

The Committee argues that, as the use of new technologies becomes routine, these proposed reforms will ensure that their potential is maximised while the associated risks are minimised. They would reverse the status quo, in which a culture of deference towards new technologies means the benefits are being minimised and the risks maximised.

The government will now consider the report and issue its response.