“no red lines, no green lights”: Ada Lovelace Institute publishes report on technology and the pandemic

July 7, 2020

The Ada Lovelace Institute has issued a new report called “no red lines, no green lights” about technology and the current pandemic. It sets out the lessons the Institute has learned from its engagement with the public. The report aims to help government and policymakers navigate difficult dilemmas when deploying data-driven technologies to manage the pandemic, and when judging what risks are acceptable to incur for the sake of greater public health.

Data-driven tools and systems are being developed and tested for use in response to the multiple challenges presented by COVID-19. COVID-19 apps under consideration by government include contact tracing, immunity certification and digital health status apps. Technology could play a powerful role in supporting public health strategy, but using novel technologies to create a form of public health monitoring will be controversial and raises complex social issues. Contemplating their deployment is only justifiable in the face of – and for the duration of – a grave crisis.

Given the complexity and importance of these tools, they must be developed with public legitimacy for two reasons:

  • COVID-19 technologies will only be effective if they are adopted and adhered to by the public. This requires both the technical tools and the policy architecture surrounding their use to be seen as acceptable and proportionate solutions.
  • Future apps may be vital to managing this health crisis or future ones. Getting COVID-19 technologies wrong now may block essential options for future technical solutions or undermine faith in public health strategies.

To support technology developers and policymakers in designing tools that anticipate the preferences and mitigate the legitimate concerns of the public, the Institute has identified six lessons that should be considered in the design and deployment of COVID-19 technologies:

  • Trust is not just about data or privacy. To be trusted, technology needs to be effective and be seen to solve the problem it is seeking to address.
  • People’s experiences and expressions of identity matter – and are complex. Categorising individuals can be reductive and disempowering.
  • Public health monitoring and identity systems are seen as high-stakes applications that will need to be justified as appropriate and necessary if they are to be adopted.
  • Tools must proactively protect against errors, harms and discrimination, with legitimate fears about prejudice addressed directly.
  • Apps will be judged as part of the system they are embedded into – the whole system must be trustworthy, not just the data or the technology.
  • The technologies under discussion are not viewed as neutral. They must be conceived and designed to account for their social and political nature.

The lessons from the public offer neither clear green lights nor neat red lines, but the Institute points out that developing and deploying new technologies is not neat or easy, especially in a crisis.

The Institute has also published a blog post by its Head of Policy, Imogen Parker, covering these themes.