Algorithms and the Rule of Law

March 25, 2017

The increasing prevalence of software in
decision-making processes raises urgent questions about how society protects
fundamental values such as equality, fairness, and the rule of law.

Examples

Consider for a moment these examples of the application of machine learning
that may alter the balance of power between private individuals and landlords,
and between criminal defendants and the state.

 Tenant Assured’ enables landlords to
vet potential tenants in advance by scraping information from various social
networks, build a picture of their personality and financial situation, and
provide an overall score.[1]
The privacy implications are obvious, but these are not the most interesting
issues. The profile for a prospective tenant is assembled by algorithms drawing
conclusions from data from a variety of sources. The Tenant Assured website
claims: 'If an applicant is using a lot of negative words and regularly
argumentative online, then he/she will likely have a negative online
reputation. And vice versa.' Leaving aside the question of whether this is a
legitimate processing of personal data, a valid conclusion, or a relevant
consideration in deciding whether to rent a property, consider the number of
subjective choices implicit in that statement, and how these must be converted
into some quantitative measure. What is a negative word? What are a lot of
them? What is regular? Programmers will have made decisions about all of these
factors, and those decisions are embedded in difficult-to-change software code
that is not available to the public to challenge or correct. Instead,
individuals are told that they cannot rent a particular apartment because
their online score is not high enough, for reasons that cannot be adequately
explained to them because the code is secret and probably difficult to
understand, especially for a non-programmer.
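
To make this concrete, here is a minimal sketch, in Python, of how such a
score might be computed. It is purely hypothetical: the word list, ratios, and
penalties are all invented, since Tenant Assured's actual rules are secret,
but it shows how each subjective judgement becomes a hard-coded constant:

    # Hypothetical sketch only: Tenant Assured's real rules are not public.
    # Every constant below is a subjective choice a programmer had to make.
    NEGATIVE_WORDS = {"hate", "broke", "unfair", "scam", "angry"}  # what is a 'negative word'?
    HIGH_RATIO = 0.05           # what proportion of words counts as 'a lot'?
    REGULAR_POSTS_PER_WEEK = 3  # what posting frequency counts as 'regularly'?

    def reputation_score(posts, posts_per_week):
        """Crude 0-100 'online reputation' score for a list of post strings."""
        words = [w.lower().strip(".,!?") for p in posts for w in p.split()]
        ratio = sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)
        score = 100
        if ratio > HIGH_RATIO:
            score -= 40  # arbitrary penalty for 'a lot of negative words'
        if posts_per_week >= REGULAR_POSTS_PER_WEEK and ratio > HIGH_RATIO:
            score -= 20  # arbitrary penalty for being 'regularly argumentative'
        return score

A tenant rejected because this number is 'not high enough' has no way of
seeing, let alone contesting, any of these choices.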
 

Secondly, several jurisdictions in the USA are relying on computer systems to
predict outcomes for criminal suspects and defendants.[2] These are used to
identify individuals who are likely to commit crimes and locations where
crimes are likely to occur, by judges to assist in determining sentences, and
by prison officials in deciding how to manage particular prisoners. Eric L
Loomis has challenged the use of the Compas algorithm, developed by
Northpointe Inc, as part of his sentencing.[3] The Wisconsin Supreme Court has
allowed continued use of the system but has highlighted that it is only one
factor to be considered when passing sentence.[4] On a more positive note, a
pilot project in New York City using big data and machine learning proved more
accurate than judges at predicting which defendants were likely to offend if
released on bail, while not considering their race, a development which is
likely to lead to a re-balancing of the prison population.[5]
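
By way of illustration only (the New York pilot's actual design is not
described here), excluding a protected attribute from such a model can be as
simple as dropping the column before training. The feature names and data
below are invented:

    # Illustrative sketch with invented features and toy data, not the
    # actual New York pilot. Requires scikit-learn.
    from sklearn.linear_model import LogisticRegression

    FEATURES = ["age", "prior_arrests", "failed_appearances", "race"]
    rows = [
        [22, 3, 1, 0], [35, 0, 0, 1], [29, 5, 2, 0], [41, 1, 0, 1],
        [19, 2, 1, 1], [52, 0, 0, 0], [27, 4, 2, 1], [33, 1, 0, 0],
    ]
    reoffended = [1, 0, 1, 0, 1, 0, 1, 0]  # toy outcome labels

    # Drop the protected attribute before the model ever sees it.
    keep = [i for i, f in enumerate(FEATURES) if f != "race"]
    X = [[row[i] for i in keep] for row in rows]

    model = LogisticRegression().fit(X, reoffended)
    print(model.predict_proba([[25, 2, 1]])[0][1])  # estimated risk for a new defendant

Even then, other features can act as proxies for the excluded attribute, which
is one reason such systems deserve scrutiny rather than blind trust.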

Thirdly, Joshua Browder, a British student at Stanford University, has
developed a chatbot-based expert system, DoNotPay.co.uk, which assists with
challenges to parking tickets. Tens of thousands of tickets have been
overturned with its help, and he has expanded the system to assist with
applications for refugee status.[6]

Adoption

Digital information and communications technologies, particularly big data,
machine learning tools, and networked devices, have been enthusiastically
adopted by individuals and businesses, altering the texture of commercial and
social relationships in profound ways. It is clear that these new tools are
becoming an important part of modern society and a significant factor to
consider when developing political or business strategies, opening new
markets, or trying to solve problems. These changes have raised many legal
questions, and issues around data protection, the deregulation and automation
of public transport, and copyright infringement have become headline news.
What has received less attention until quite recently is the use of these
technologies by the state.
 

State use of these technologies has seemed less obviously relevant to the
individual, and less socially significant, so little attention has been paid
to how it may affect fundamental legal values. The Snowden revelations changed
that with regard to mass surveillance, but many other aspects of government
use of information technology go unexplored. Large public databases, obscure
software systems, and creaky user interfaces are not interesting to the
average citizen or to the journalists whose role it is to inform them of what
matters.
 

There is a great deal more to the wide-scale adoption of digital technology
than meets the eye, particularly in the hidden corners of the modern state.
This is slowly becoming better understood by professionals and the general
public. In recent years, stories have emerged that highlight the key role that
algorithms (which are, at their heart, processes of selection) play in shaping
the world around us. Google Image Search has displayed racial and gender
biases: labelling pictures of black people as 'gorillas',[7] selecting mug
shots when asked to show 'three black teenagers' but happier images when asked
to show 'three white teenagers',[8] and under-representing women in searches
for career-related keywords.[9] Facebook has been accused of manipulating the
news stories which are highlighted in individuals' feeds.[10] Volkswagen has
admitted installing software in its diesel cars that cheats on emissions
tests, which has led to the prosecution of executives and fines of $4.3
billion.[11] Other car manufacturers are accused of using similar 'defeat
devices'.[12]

Public Sector Use and Difficulties

If this is occurring in the private sector, what biases, errors, and
assumptions in the use of algorithms in the public sector are causing
difficulties for individual citizens?

We still know very little about how government uses these technologies and
what may go wrong. Software is now a key component in much of the machinery of
government – for example, social welfare, the tax system, and environmental
management and regulation – and is increasingly playing a role in policing and
warfare through the profiling of criminal and terrorist suspects, the use of
drones, and the deployment of autonomous weapons systems. The issues buried in
these systems should be a matter of serious concern for law and lawyers as we
try to protect fundamental values in the 21st century and to properly
represent our clients' interests against system errors that we cannot easily
identify.
 

How are individuals selected for tax audit? Why are they denied social welfare
benefits? Why are they investigated by the police? Often, this will be because
of outputs from software programs, sometimes machine learning systems that
sift through masses of big data. As Danielle Keats Citron has discussed in her
writings,[13] this development – the taking of decisions about individual
rights and entitlements based on unknown, inscrutable, and often
unchallengeable systems – raises serious challenges to the rule of law in the
modern state. Similarly, as private sector systems become the conduits for our
daily lives, how many individuals are side-lined because of unfair factors?
(For example, Google advertising seems to have a gender bias; it generally
will not show advertisements for executive jobs to women.[14])
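
The opacity of such selection is easy to demonstrate. In the hypothetical
audit-selection sketch below (the weights, features, and threshold are all
invented; no real tax authority's code is shown), the taxpayer learns only the
outcome, never the score or the rule that produced it:

    # Hypothetical sketch: how an opaque score might select taxpayers for audit.
    # The weights and cut-off are invented; real systems do not publish theirs.
    WEIGHTS = {"cash_income_ratio": 2.5, "deduction_ratio": 1.8, "late_filings": 0.7}
    AUDIT_THRESHOLD = 3.0  # who chose this number, and on what evidence?

    def audit_score(taxpayer):
        return sum(WEIGHTS[k] * taxpayer[k] for k in WEIGHTS)

    def select_for_audit(taxpayers):
        # The taxpayer is told only that they were selected; the score stays internal.
        return [t["id"] for t in taxpayers if audit_score(t) > AUDIT_THRESHOLD]

    print(select_for_audit([
        {"id": "A", "cash_income_ratio": 0.9, "deduction_ratio": 0.4, "late_filings": 1},
        {"id": "B", "cash_income_ratio": 0.1, "deduction_ratio": 0.2, "late_filings": 0},
    ]))  # -> ['A']

Whether the weights are fair, or the threshold is evidence-based, is invisible
to the person selected.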

Errors in these processes often arise from mundane causes and affect a small
number of people, rather than stemming from the large-scale application of the
current buzzwords of 'big data' and 'algorithms', but the impact on individual
lives can nonetheless be significant. For example, easy-to-anticipate problems
in transitioning from one computerised case management system to another in
Alameda County, California led to dozens of people being arrested or jailed in
error, and others being forced to register as sex offenders when they should
not have been.[15] Big data can produce problems on a much larger scale: the
Australian Centrelink social welfare compliance system generated so many false
positives that many thousands of citizens were needlessly required to prove
that they were properly in receipt of benefits.[16] Overall, it is clear that
the use of databases, algorithms, and big data by government and bureaucracy
has the potential to go seriously awry, with significant negative consequences
for individuals and populations.
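
Reports on the Centrelink system suggest that many of its false positives came
from averaging annual tax-office income evenly across fortnights, an
assumption that fails for anyone whose earnings were concentrated in part of
the year. A simplified sketch, with an invented earnings limit, shows the
flaw:

    # Simplified sketch of the reported flaw: averaging annual income evenly
    # across fortnights flags anyone with irregular earnings, even if every
    # individual payment was correct.
    FORTNIGHTS = 26
    INCOME_LIMIT = 500.0  # hypothetical fortnightly earnings limit for benefits

    def naive_compliance_check(annual_income, fortnights_on_benefit):
        average = annual_income / FORTNIGHTS  # assumes income was spread evenly
        return average > INCOME_LIMIT and fortnights_on_benefit > 0

    # A person who earned 26,000 in six months of full-time work, then
    # correctly claimed benefits while unemployed, is falsely flagged:
    print(naive_compliance_check(annual_income=26000, fortnights_on_benefit=10))  # True

Every individual payment in this example was correct; the error lies entirely
in the averaging assumption.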
 

These problems should not be overstated. We are not (yet) in a dystopia of
computer control, where one's fate is entirely pre-determined by
unchallengeable calculations. Digital technology also offers opportunities for
transparency and empowerment, and properly designed systems may (as noted
above) help to overcome bias and prejudice. Humans are still very much part of
the loop in decision-making. There remains significant opportunity to
influence and manage the development of computer technology, to ensure that
ethics and law are part of the curriculum for software developers and
analysts, and to regulate as necessary. However, the development of big data,
computer-assisted decision-making, and e-regulation presents serious
challenges to the rule of law, equality, and natural justice, and the poor
understanding and limited transparency of software development mean that these
require serious attention from those who research, teach, and practise law.

Dr Rónán Kennedy is a Lecturer above the Bar at the School
of Law, National University of Ireland Galway.

 



[2] Nicholas Diakopoulos, 'We need to know the algorithms the government uses to make important decisions about us' (The Conversation, 24 May 2016).

[3] Megan Garber, 'When Algorithms Take the Stand' The Atlantic (30 June 2016).

[4] Wisconsin v Loomis 2016 WI 68.

[5] Tom Simonite, 'How to Upgrade Judges with Machine Learning' MIT Technology Review (6 March 2017).

[6] Jon Fingas, 'Parking ticket chat bot now helps refugees claim asylum' (Engadget, 6 March 2017).

[7] Jana Kasperkevic, 'Google says sorry for racist auto-tag in photo app' The Guardian (1 July 2015).

[8] Elle Hunt, ''Three black teenagers': anger as Google image search shows police mugshots' The Guardian (9 June 2016).

[9] Emily Cohn, 'Google Image Search Has A Gender Bias Problem' (The Huffington Post, 21 April 2015).

[12] Hiroko Tabuchi, 'E.P.A. Accuses Fiat Chrysler of Secretly Violating Emissions Standards' New York Times (13 January 2017).

[13] Danielle Keats Citron, 'Technological Due Process' (2008) 85 Washington University Law Review 1249.

[14] Julia Carpenter, 'Google's algorithm shows prestigious job ads to men, but not to women. Here's why that should worry you' The Washington Post (6 July 2015).

[15] Cyrus Farivar, 'Lawyers: New court software is so awful it's getting people wrongly arrested' (Ars Technica, 12 January 2016).

[16] Christopher Knaus, 'Centrelink debt notices based on 'idiotic' faith in big data, IT expert says' The Guardian (29 December 2016).