CDEI: questions of algorithmic bias cannot be separated from biased decision-making more broadly

November 30, 2020

The UK government commissioned the Centre for Data Ethics and Innovation to review the risks of bias in algorithmic decision-making. The review formed a key part of the CDEI’s 2019/2020 Work Programme, although its completion was delayed by the pandemic. The resulting report is the final output of the review and includes a set of formal recommendations to the government. The CDEI has also published a blog post on the topic.

The government asked the CDEI to draw on expertise and perspectives from stakeholders across society and to provide recommendations on how the government should address this issue. The report also provides advice for regulators and industry, aiming to support responsible innovation and help build a strong, trustworthy system of governance. The government has committed to responding to the CDEI’s review.

The CDEI focused on the use of algorithms in significant decisions about individuals. The review examines algorithmic decision-making in four sectors (policing, local government, financial services and recruitment) and makes recommendations aimed at building the right systems so that algorithms improve, rather than worsen, decision-making. These sectors were selected because they all involve significant decisions about individuals, and because there is evidence in each of both growing uptake of algorithms and historic bias in decision-making.

The measures that the CDEI has proposed are designed to produce a significant change in the behaviour of all organisations making life-changing decisions on the basis of data, with a focus on improving accountability and transparency. Key recommendations include:

  • The government should place a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals.
  • Organisations should be actively using data to identify and mitigate bias. They should make sure that they understand the capabilities and limitations of algorithmic tools, and carefully consider how they will ensure fair treatment of individuals.
  • The government should issue guidance that clarifies how the Equality Act applies to algorithmic decision-making. This should cover the collection of data needed to measure bias, as well as the lawfulness of bias mitigation techniques (some of which risk introducing positive discrimination, which is illegal under the Equality Act). The report also recognises the need for equality impact assessments.

The CDEI will support industry, regulators and government in taking forward the practical delivery work needed to address the issues identified in the review, as well as future challenges as they arise.