Public distrust is the most fundamental brake on AI

Findings set out in an AI Barometer published by the Centre for Data Ethics & Innovation (CDEI).

The CDEI has published its AI Barometer, which analyses opportunities, risks, and governance challenges associated with AI and data use in the UK, across the criminal justice, financial services, health & social care, digital & social media, and energy & utilities sectors.

What are the key findings?

The AI Barometer highlights the potential for AI and data-driven technology to address society’s greatest challenges. However, the CDEI’s analysis suggests we have only begun to tap into that potential, for example in improving content moderation on social media, supporting clinical diagnosis in healthcare, and detecting fraud in financial services. Even sectors that are mature in their adoption of digital technology (such as the finance and insurance industry) have yet to maximise the benefits of AI and data use.

Some opportunities are easier to realise than others. The more readily achievable ones involve using AI and data to free up time for professional judgement, improve back-office efficiency and enhance customer service. ‘Harder to achieve’ innovations, in contrast, involve applying AI and data in high-stakes domains that often require difficult trade-offs (for example, police forces seeking to use facial recognition must carefully balance the public’s desire for greater security with the need to protect people’s privacy).

The report considers well-known risks, such as technologically driven misinformation in healthcare. It also highlights risks that are less prominent in media and policy discussions, for instance how differences in the way data is collected and used in healthcare and in social care limit the technology’s benefits in the latter setting.

A number of concerns were raised across most contexts. These include the risks of algorithmic bias, a lack of explainability in algorithmic decision-making, and the failure of those operating technology to seek meaningful consent from people to collect, use and share their data.

Several barriers stand in the way of addressing these risks and maximising the benefits of AI and data. These range from market disincentives (such as social media firms fearing a loss of profits if they take action to mitigate disinformation) to regulatory confusion (where oversight of new technologies like facial recognition may fall between the remits of different regulators).

These barriers can be addressed through incentives, rules and cultural change. There are promising examples of interventions from regulators, researchers and industry, which could pave the way for more responsible innovation.

Three types of barrier merit close attention:

  • low data quality and availability;
  • a lack of coordinated policy and practice;
  • a lack of transparency around AI and data use.

Each contributes to a more fundamental brake on innovation: public distrust. In the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition and suffering reputational damage.

What happens next?

Over the coming months, the CDEI will promote the findings of the AI Barometer to policymakers and other decision-makers across industry, regulation and research. The AI Barometer itself will also be expanded over the next year, looking at new sectors and gathering more cross-sectoral insights. Additionally, the CDEI is embarking on a new programme of work that will aim to address many of the barriers identified in the AI Barometer as they arise in different settings, from policing to social media platforms.

Published: 19 June 2020
