AI use in the public sector: “urgent need for guidance and regulation, especially on the issues of transparency and data bias”

Committee on Standards in Public Life report published: suggests clear standards of conduct and greater transparency will help UK public sector to reap the benefits of AI

The Committee on Standards in Public Life has published its report and recommendations to government to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector.

Artificial intelligence, especially machine learning, will transform the way public sector organisations make decisions and deliver public services. AI must be subject to appropriate safeguards and regulations so that the public has confidence that new technologies will be used in a way that upholds the Seven Principles of Public Life, which are selflessness, integrity, objectivity, accountability, openness, honesty and leadership.

The Committee says that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress, with notable deficiencies. It says that the work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office is commendable. However, there is an urgent need for guidance and regulation, especially on the issues of transparency and data bias.

To uphold public standards, government and public sector organisations should establish effective governance to mitigate the risks the Committee has identified. The government needs to identify and embed authoritative ethical principles and issue accessible guidance. The government and regulators must also establish a coherent regulatory framework that sets clear legal boundaries on how AI should be used in the public sector.

Regulators must also prepare for the changes AI will bring. The Committee concludes that the UK does not need a new AI regulator, but that all regulators must adapt to the challenges that AI poses to their sectors. The Committee endorses the government’s intention to establish CDEI as an independent, statutory body that will advise government and regulators in this area.

Upholding public standards will also require action from public bodies using AI to deliver frontline services. All public bodies must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI.

The Committee has made recommendations in the following areas:

Ethical principles and guidance - there are currently three different sets of ethical principles: the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework. It is unclear how these work together, and public bodies may be uncertain which principles to follow. The public needs to understand the high-level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles, and outline the purpose, scope of application and respective standing of each of the three sets. The guidance produced by the Office for AI, the Government Digital Service and the Alan Turing Institute on using AI in the public sector should be made easier to use and understand, and promoted extensively.

Articulating a clear legal basis for AI - all public sector organisations should publish a statement on how their use of AI complies with relevant regulation before AI is deployed.

Data bias and anti-discrimination law - the Equality and Human Rights Commission should develop guidance in partnership with both the Alan Turing Institute and the CDEI on how public bodies should best comply with the Equality Act 2010.

Regulatory assurance body - the Committee recommends establishing a regulatory assurance body to identify gaps in the regulatory landscape and to advise individual regulators and government on AI issues. The Committee does not recommend the creation of a specific AI regulator, and endorses the government’s intention for CDEI to perform a regulatory assurance role.

Procurement rules and processes – the government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements.

The Crown Commercial Service’s Digital Marketplace - the Crown Commercial Service should introduce practical tools as part of its new AI framework to help find AI products and services meeting ethical requirements.

Impact assessment – the government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.

Transparency and disclosure – the government should establish guidelines for public bodies about the declaration and disclosure of their AI systems.

Evaluating risks to public standards - providers of public services, both public and private, should assess the potential impact of a proposed AI system on public standards at project design stage, and ensure that risks are mitigated. This should also occur every time a substantial change to the design of an AI system is made.

Diversity - providers of public services must consciously tackle issues of bias and discrimination by taking into account a diverse range of behaviours, backgrounds and points of view. They must reflect the full diversity of the population and provide a fair and effective service.

Upholding responsibility - providers should ensure that responsibility for AI systems is clearly allocated and documented, and that operators of AI systems are able to exercise their responsibility in a meaningful way.

Monitoring and evaluation - providers should monitor and evaluate their AI systems to ensure they always operate as intended.

Establishing oversight – providers should set oversight mechanisms that allow their AI systems to be properly scrutinised.

Appeal and redress – providers must always inform citizens of their rights and the methods of appeal available against automated and AI-assisted decisions.

Training and education – providers should ensure their employees working with AI systems undergo continuous training and education. 

Published: 11 February 2020