Office for AI issues Framework for Automated Decision-Making in the public sector

Seven point framework aims to improve government use of automated or algorithmic decision-making systems.

According to recent surveys by the EU and the British Computer Society, there is distinct public distrust of how advanced technology is regulated. A review by the Committee on Standards in Public Life found that the government should produce clearer guidance on the ethical use of artificial intelligence in the public sector.

Automated decision-making refers to both solely automated decisions (no human judgement) and automated assisted decision-making (assisting human judgement). Current guidance can be lengthy, complex and sometimes overly abstract. Decision-makers should not assume that automated or algorithmic decision-making is a ‘fix-all’ solution, particularly for the most complex problems. 

Therefore, the new seven point framework has been published with the aim of helping government departments with the safe, sustainable and ethical use of automated or algorithmic decision-making systems. It has been developed in line with guidance from government (such as the Data Ethics Framework) and industry, as well as relevant legislation. 

Government departments should use the framework alongside existing organisational guidance and processes. The framework sets out common risk areas:

  • input data, including biased, outdated datasets;
  • algorithm design, including flawed assumptions and biased logic;
  • output decisions, including incorrect interpretation;
  • technical flaws, including insufficient rigour in development and testing;
  • usage flaws, including integration with existing operations; and
  • security flaws, including deliberate flawed outcomes.

When departments use automated decision-making in a service, they should consider the following seven points:

  • Test to avoid any unintended outcomes or consequences.
  • Deliver fair services for all users and individuals.
  • Be clear who is responsible.
  • Handle data safely and protect individuals’ interests.
  • Help users and individuals understand how the technology affects them.
  • Ensure that the use of automated systems is compliant with the law.
  • Build something that is future proof.

The framework includes practical examples and case studies in certain sectors, such as healthcare, policing and fintech.

Published: 18 May 2021