Consultation open until 14 June 2019, with an interim report to follow in summer 2019
The Centre for Data Ethics and Innovation has issued a call for evidence to inform its two reviews, on online targeting and on bias in algorithmic decision making.
The purpose of the online targeting review is to analyse the use of online targeting approaches and to make practical recommendations to the UK government, industry and civil society about how online targeting can be conducted and governed in a way that facilitates the benefits and minimises the risks it presents.
The Centre’s definition of online targeting centres on the customisation of products and services online (including content, service standards and prices) based on data about individual users. Examples of online targeting include online advertising and personalised social media feeds and recommendations.
The focus of the review is the effects of online targeting on individuals, organisations, and society. In particular, the Centre is considering how targeting approaches can undermine or reinforce autonomy - that is, individuals’ ability to make choices freely, based on information that is as full and complete as reasonably possible. The review also examines whether the effects of online targeting practices might be felt more profoundly by vulnerable people, and whether those practices might contribute to a reduction in the reliability of news and advertising content seen online. Finally, the review considers the data and mechanisms involved in online targeting and how these affect privacy and data protection principles.
This will complement the Department for Digital, Culture, Media and Sport’s workstreams on online harms and online advertising, and the Information Commissioner’s work on advertising technology, online political advertising, and age-appropriate design for online services.
The Centre requests views on the following questions:
The deadline for responses is 14 June 2019. A summary of responses will be published over the summer. The Centre will then develop and recommend governance and, where relevant, other types of solutions, focused on areas and audiences identified during the review.
Bias in algorithmic decision making
Machine learning algorithms often work by identifying patterns in data and making recommendations accordingly. This can support good decision making, reduce human error and combat existing systemic biases. However, issues can arise if, instead, algorithms begin to reinforce problematic biases, either because of errors in design or because of biases in the underlying data sets. When these algorithms are then used to support important decisions about people’s lives, for example determining their credit rating or their employment status, they have the potential to cause serious harm.
The CDEI will investigate whether, and to what extent, this is an issue in four key sectors: financial services, crime and justice, recruitment, and the allocation of local government resources. It will then produce recommendations to the UK government, industry and civil society about how any potential harms can be minimised.
The Centre is concerned with the following issues:
The deadline for general responses, and for responses on crime and justice and on financial services, is 14 June 2019. The deadline for responses on local government and recruitment is 19 July 2019.
An interim report will be published in summer 2019, and a final report, including recommendations to government, by March 2020.