Facial recognition technology: Supporting a sustainable lockdown exit strategy?

May 10, 2020

Technology has played a dominant role during the lockdown and will be a key aspect of ensuring the transition back to normality is successful. This article discusses recent trends, particularly in Ireland, Denmark and China, regarding the adoption of facial recognition technology (FRT) as a result of the COVID-19 pandemic. We look in more detail at some of the pre-pandemic use cases for, and concerns about, FRT, and consider the key aspects of data protection law when adopting such technology solutions. 

Exit through technology

Age, geographic, sector and other forms of segmentation and social distancing will become the longer-term norm as countries transition out of lockdown. Key measures such as technology-enabled contact tracing are playing an important part, particularly in jurisdictions at a more advanced stage of transition out of lockdown. It has been suggested that a 60% adoption rate for tracing apps could end the epidemic. This requires the population to suspend concerns about privacy for a greater common good; to trust that the benefits outweigh the risks and that the technology is designed and used in a way that strikes the appropriate balance.

During lockdown, and as we move out of it, certain essential services and sectors such as medical device and food manufacturing, telecoms and core banking services have remained operational or will gradually expand before others.

In Ireland, a large food producer has put an FRT solution into live use as part of its staff protection measures, so that staff do not need to sign in manually at the start and end of their shifts. There are already signs of a move towards germless, contactless security and access control systems. One Chinese company has confirmed that its masked facial recognition program achieves a 95% accuracy rate and has noted a surge in requests for the technology at entrances to premises. At the moment, these come predominantly from hospitals at the centre of the outbreak in China, which want to ensure that nurses wearing masks, whose access needs to be confirmed at a distance, are admitted to work. And the technology is advancing: FRT can be connected to a temperature sensor, measuring a subject’s body temperature while also identifying their face and name.

Companies looking to adopt such technology need to consider the restrictions and balances set out in the existing legal framework, taking into account, in particular, that what is necessary in a lockdown scenario may not be necessary when relative normality returns. An interesting position taken by some Asian privacy regulators is that, under the Universal Declaration of Human Rights, the right to life is an absolute right, whereas the right to privacy is a qualified right. They are, therefore, looking at privacy considerations in connection with the pandemic through that lens. In this article, we take a closer look at some of the more established use cases for facial recognition technology and how those checks and balances apply to them.

What is facial recognition technology?  

Facial recognition was first researched in the mid-1960s by Woodrow Bledsoe and Helen Chan, who used computer programming to match a photograph against a large database of mugshots. Their method involved manually extracting features from photographs and then inputting them into a computer system that compared the pictures. Funded by an unnamed intelligence agency, the project was never widely published and was limited by the technological constraints of the time. Fast-forward 50 years, and FRT offers great opportunity, effective solutions and some high-profile challenges.

Mass surveillance, perceived invasion of privacy, inherent risk of bias and a general lack of understanding of the use cases for the technology are among the main social and political concerns in relation to FRT. Many of these concerns come down to trust. In May 2019, San Francisco became the first US city to ban the use of FRT by any local agencies, including law enforcement. Other cities followed suit in July, and in October 2019 California introduced a state-wide ban on using FRT on police body-worn cameras.

Californians cited, as their most common concerns, mass surveillance, invasion of privacy and inherent systemic bias that may disadvantage minority groups. Various communities in California, including LGBTQ and Muslim communities, have reportedly been subject to local government profiling, so there is an underlying lack of trust in what is seen as law enforcement agencies adding yet another tool to their surveillance arsenal. In March 2020, the Washington state legislature passed a public sector facial recognition privacy bill that imposes extensive restrictions and conditions on government use of FRT, the effect of which is likely to slow deployment of the technology in the state.

And yet, the benefits of the technology continue to drive an increase in the rate of adoption of FRT solutions internationally.

In turn, legislators and regulators are increasingly being required to consider application of existing laws, in particular data protection law, to use of FRT. 

Use of facial recognition technology can, but will not always, involve the collection of biometric data. To constitute biometric data, the data must allow for the unique identification of an individual. Under Article 4(14) of the General Data Protection Regulation (GDPR), biometric data is defined as:

“Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic (fingerprint) data”. 

FRT offers a range of use cases, but can be broadly categorised as:

– Verification FRT: holds one piece of biometric information against which it compares samples that are presented to it (a one-to-one check).

Smartphones

The most widespread use case for verification FRT is the smartphone functionality that allows access through verification of facial features. After the initial scan, the phone holds the biometric data of the user’s face and checks whether what is presented to it when unlocking the phone matches the data it holds.
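Conceptually, this is a one-to-one comparison of feature vectors. The sketch below is a minimal illustration in Python, assuming a hypothetical embedding step that has already converted each face image into a fixed-length feature vector; the function names and the threshold value are illustrative, not any vendor’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two face feature vectors, in the range [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_template: np.ndarray, presented_sample: np.ndarray,
           threshold: float = 0.8) -> bool:
    # One-to-one check: compare the presented sample against the single
    # stored template and unlock only if similarity clears the threshold.
    # The threshold of 0.8 is an illustrative assumption.
    return cosine_similarity(enrolled_template, presented_sample) >= threshold
```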

– Identification FRT: checks samples against all biometric references in its systems to identify an unknown person (a one-to-many check). More advanced forms of Identification FRT include real-time scanning and analysis, known as live or automated facial recognition, the lawfulness of which has been questioned by academics in the Met Police trial (discussed below) and before the courts in the South Wales case, but which may be more widely accepted in other parts of the world, such as Asia. Consent, legal basis and transparency were among the issues that emerged in the course of judicial and academic scrutiny of the commercial application of Identification FRT.

It is noticeable that in parts of the world where explicit consent is relied on to legitimise data collection, including biometric data collection, Identification FRT is more widely accepted and commonplace (with appropriate safeguards in place).

South Wales Police: Technology use case

The mechanics of the Identification FRT, as described in the case, were as follows (a simplified sketch of the matching step appears after this list):

  • Creating a watchlist: a database of images against which the live facial recognition (LFR) images would be compared was compiled from the police database created in the course of normal policing activities (mainly custody photographs). The facial features extracted from the images were then turned into numerical values. The watchlist included persons wanted on warrants, individuals who had escaped custody, persons suspected of committing crimes, and missing persons. Inclusion on a watchlist was not based on a suspicion that the individual might be present in the area of deployment.
  • Acquiring a facial image: CCTV would capture a moving image when the person was in the camera’s field of view. 
  • Face detection: the software would detect human faces and isolate the individual. 
  • Feature extraction: the software extracted the unique facial features from the image of each face. 
  • Face comparison: extracted facial features were compared to those held on the watchlist. 
  • Matching: during matching, the software would create a “similarity score” – a numerical value indicating the likelihood that the images match. The threshold could be set at a desired level to indicate when a match is found. Matches were then reviewed by a police officer to ensure accuracy. If no match was made, there was no further action. Where there was a match, intervention officers were engaged and only intervened if satisfied that the match was in fact a suspect.
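By way of illustration only, the matching step can be sketched as a one-to-many search over pre-computed feature vectors. The Python below is a simplified, hypothetical model of that step – the watchlist structure, names and threshold are our assumptions, not the system actually deployed by South Wales Police – and, as in the case, any hit would still go to a human officer for review.

```python
from typing import Dict, Optional, Tuple
import numpy as np

def similarity_score(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two extracted feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray,
             watchlist: Dict[str, np.ndarray],
             threshold: float = 0.75) -> Optional[Tuple[str, float]]:
    # One-to-many check: score the probe against every watchlist template
    # and return the best match above the threshold, or None if there is
    # no match. A real deployment would refer any hit to an officer for
    # manual review rather than acting on it automatically.
    best: Optional[Tuple[str, float]] = None
    for identity, template in watchlist.items():
        score = similarity_score(probe, template)
        if score >= threshold and (best is None or score > best[1]):
            best = (identity, score)
    return best
```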

Not all FRT use cases are so intrusive: for example, some FRT can detect the presence of a face without determining whom the face belongs to. An example is the technology in a smartphone that can detect how many people are in a photo by drawing a square around each face. Indeed, in some parts of the world (such as Hong Kong and in public spaces in Singapore), this form of data collection might not even constitute personal data collection if the intention is not to identify data subjects.
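As a minimal sketch of this detection-only mode, the code below uses OpenCV’s off-the-shelf Haar cascade detector (the output file name is illustrative) to count and box faces without extracting any identifying features or comparing them against stored templates:

```python
import cv2

def count_faces(image_path: str) -> int:
    # Detection only: locate faces and draw a box around each one.
    # No features are extracted and nothing is compared to a template,
    # so no individual is identified.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces_detected.jpg", image)
    return len(faces)
```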

The benefits of the technology are clear from some of the early adopters. Car manufacturers, such as Ford, are working with tech companies to install facial recognition in their vehicles. The FRT will learn to recognise the primary driver and other regular drivers, such as family members, and the car settings will adjust depending on who is sitting in the driving seat. Banking apps use FRT to increase security when logging in or to authorise payments. Facebook has been using FRT since 2014, when it launched its DeepFace program. Its FRT is able to suggest tagging a Facebook user based on their having previously been tagged in other photos.

As noted above, the uptake of FRT has been particularly high in China and other parts of Asia, where consumers can – and routinely do – scan their faces to pay for groceries or withdraw money. Provided explicit consent has been obtained and other safeguards are in place (such as avoiding excessive processing, restrictions on sharing, and data security), this is seen as a convenience and a preferred personalised service rather than a practice raising concerns. This has in turn led to wide acceptance of FRT during the lockdown as part of prevention and control measures against the virus, and we expect it will continue to be a widely accepted measure as China and other parts of Asia get back to business as the lockdown ends.

The primary issues to consider are the extent to which law and regulation ought to be adapted to address the sometimes valid concerns regarding trust, and the point at which regulation dilutes the social benefits of FRT innovation.

Current EU legal framework

GDPR: Key principles 

In Europe, the data protection law relevant to FRT is found mainly in the GDPR. The diagram below identifies the typical data controller/processor dynamic when an FRT solution is being used.

[Diagram: the typical data controller/processor dynamic in an FRT deployment]

Under Article 9(1) GDPR, biometric data constitutes “special category data,” processing of which is generally prohibited unless one of several specific exceptions applies.  

Not all data collected using FRT will be classified as special category data: Article 9(1) GDPR specifies that biometric data is special category data only when it is used to uniquely identify someone. If, for example, FRT is used to detect whether a customer is male or female, it will not necessarily uniquely identify an individual and could therefore fall outside the scope of special category data. Another instance is where digital photographs of individuals are processed and the image data is not used further (for example, to create a digital profile). Importantly, Article 9 is one of the GDPR provisions that leaves a fair degree of latitude to Member States to legislate further at local level. Accordingly, this aspect of the GDPR is less harmonised across Europe and presents a further challenge to businesses seeking to roll out, on a pan-European basis, FRT solutions designed to capture special category data.

When processing personal data, the data controller must adhere to the core principles set out in Article 5. The principles key to FRT include:

Purpose limitation

The data must be collected for a specified, explicit and legitimate purpose that is defined at the time the personal data is collected. 

Transparency 

Ensuring clarity of purpose enables compliance with the transparency principle, which further obliges the data controller to provide data subjects with information regarding the processing of their data in a clear, concise and comprehensible format, in the form of a fair processing notice. Delivering adequate notices can pose a significant challenge for FRT use. For example, individuals may already be in the vicinity of an FRT-enabled camera by the time they become aware of the signage.

Data minimisation

The data minimisation principle requires data controllers to collect the minimum amount of data required for the defined purposes. Further, the processing must be balanced against the rights of the data subject. 

Data security

Appropriate technical and organisational security measures should be in place to ensure the security of data obtained from FRT solutions. Defining what is appropriate requires an assessment of various factors, including the nature, scope, context and purposes of the processing, and the risk of varying likelihood and severity for the rights and freedoms of natural persons. In most FRT solutions, meeting this standard will require a significant investment in security.

No automated decision-making 

Article 22 gives data subjects the right not to be subject to decisions based solely on automated processing – i.e. without human intervention. Such processing is, however, permitted with explicit consent, where expressly authorised by law, or for reasons of substantial public interest.

When processing special category data, an Article 9 exemption must be satisfied in addition to one of the lawful bases in Article 6. The main bases for processing biometric data include public interest, legitimate interests and consent, each of which comes with limitations and challenges.

It is therefore necessary to understand in detail the functionality of the technology and the scope of the proposed use case before drawing any conclusion as to its legality. A good illustration of this is in the South Wales Police case.

R (Bridges) v Chief Constable of South Wales Police

The lawfulness of live facial recognition (LFR) use has been tested in the UK: in September 2019, the High Court ruled on a challenge brought by a civil liberties campaigner against South Wales Police’s use of LFR in 2017 and 2018. The court held that South Wales Police’s use of LFR was lawful, though the case is being taken to the Court of Appeal.

The claimant challenged the general lawfulness of the LFR use and the adequacy of the legal framework relating to LFR. The LFR use was challenged on three grounds:

  • Human rights: it was claimed that the use of LFR interfered with Article 8(1) of the European Convention on Human Rights.

  • Public law: it was claimed that South Wales Police failed to comply with its public sector duty under the UK Equality Act 2010, s. 149(1).

  • Data protection: the main point of contention related to what constituted personal data. South Wales Police argued that the only personal data it was processing was that of the people on the watchlist. The court applied the concept of individuation, drawing on the UK Data Protection Act 1998, under which personal data is data relating to a person who can be identified by one or more factors specific to their physical, physiological, mental, economic, cultural or social identity.