Biometric Technology and Privacy Issues

February 9, 2007

Biometric technologies, which record unique physical features in digitised form, are now on the cusp of widespread commercial availability and are set to make keys, photocards and PINs a thing of the past. It will not be long before fingerprint readers start to replace microchip-enabled photocards for accessing commercial premises, iris scanners appear at UK airports and facial recognition technologies become commonplace for monitoring security at football grounds.

At present, much of the technology is still developmental, but biometric applications are proliferating fast and some of the more basic systems are already on the market. The US Visa Waiver Program, to which the UK is a signatory, demands that all passports contain a machine-readable chip holding the passport holder’s details and a biometric identifier such as a digital photograph. Accordingly, all new UK passports must now incorporate biometric data in the form of an iris or fingerprint scan in an embedded microchip, and by the summer, 10 UK airports will be using iris scanning technology, with a planned roll-out to all 141 UK ports.

The private sector too is starting to take advantage of the new technology, as the commercial benefits of biometrics become more tangible. The Swiss bank Pictet and Cie in Geneva, for example, is already using iris scanning technology to control employee access to its offices. Several European casinos have installed facial recognition technology to identify unwanted or banned customers, and even gambling addicts who want to be stopped when temptation gets the better of them.

The advantages to companies of using biometric systems are clear:
• they have an extremely high degree of reliability because it is impossible (short of amputation or mutilation) to lose or forget biometric traits, and very difficult to copy, distribute or misuse them;
• they could not be simpler to use, because individuals do not need to remember passwords or PINs;
• system integrity is virtually guaranteed because no two people share the same biometric traits and it is nigh on impossible to reproduce them, which in turn reduces the potential for fraud and enhances security.

The use of biometrics does carry with it important legal implications, however. Most significantly, biometrics pose new and complex questions about compatibility with individuals’ privacy. Companies need to be sure that any biometric system they propose to introduce will not fall foul of data protection or human rights laws.
In simple terms, biometric systems are based either on ‘verification’ or on ‘identification’, and are either voluntary or compulsory. The privacy implications vary considerably in each case. Verification systems operate by verifying that a person is in fact who he or she claims to be. At its most basic, this means using a fingerprint biometric to verify that a person seeking access to a building or bank account is authorised to do so. Identification systems go a great deal further: they compare information about one person with information about many people held on a database, in order to establish who that person is (known as a ‘one-to-many’ match).

An example of an identification system is the National DNA Database (NDNAD), which is run by the Forensic Science Service to identify offenders and victims using crime scene DNA samples. When the NDNAD was established in 1995, the taking of samples was closely controlled and quite limited. Now, DNA samples can be taken from anyone arrested and detained by the police in custody, and non-intimate samples (such as a mouth swab) can be taken without consent. DNA records can be retained even if the arrested person is cleared, indeed even if they are never prosecuted. Importantly (and in some cases controversially), these developments have only been made possible by specific legislative changes.
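The difference between one-to-one verification and one-to-many identification can be illustrated with a short sketch. Everything here is hypothetical: the feature vectors, the similarity measure and the threshold are invented for the example, and real biometric matching is far more sophisticated than this.

```python
# Illustrative sketch only: one-to-one verification vs one-to-many identification.
# Feature vectors, names and the similarity threshold are invented for the example.

def similarity(a, b):
    """Toy similarity score: 1.0 for identical vectors, lower as they diverge."""
    distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + distance)

THRESHOLD = 0.8  # arbitrary cut-off for what counts as a 'match'

# Enrolled templates held on the database (hypothetical data).
templates = {
    "alice": [0.1, 0.9, 0.4],
    "bob":   [0.7, 0.2, 0.5],
}

def verify(claimed_id, sample):
    """One-to-one: does the sample match the claimed identity's template?"""
    template = templates.get(claimed_id)
    return template is not None and similarity(template, sample) >= THRESHOLD

def identify(sample):
    """One-to-many: which enrolled identity, if any, best matches the sample?"""
    best_id, best_score = None, 0.0
    for person, template in templates.items():
        score = similarity(template, sample)
        if score > best_score:
            best_id, best_score = person, score
    return best_id if best_score >= THRESHOLD else None

sample = [0.12, 0.88, 0.41]     # a fresh reading close to Alice's template
print(verify("alice", sample))  # one-to-one: checks against Alice's template only
print(identify(sample))         # one-to-many: searches every enrolled template
```

The privacy point falls out of the structure: `verify` touches a single record tied to a claim the individual has made, whereas `identify` necessarily trawls the whole database.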

Legal Limits

The use of any biometric system must comply with the European Convention on Human Rights, and with the Data Protection Directive. In the UK, these laws take the form of the Human Rights Act and the Data Protection Act.

The Human Rights Act states that we are all entitled to respect for our private life, and interference with this by government is permitted only in specific circumstances. The courts have made it very clear that ‘private life’ is not confined to life outside work; it applies in the workplace too.

The Data Protection Act regulates the way that organisations process information which identifies us. It requires, for example, that the use of such information (including biometric data) must be ‘fair’, and normally such use must also be limited to purposes which were notified to the individual at the time he or she first handed over personal data.

In the context of biometric technologies, two overriding principles will apply in every case. These are the principles of ‘proportionality’ and ‘transparency’. Proportionality requires that interference with someone’s private life, or the use of his or her biometric data, must be justifiable by the benefits of the scheme. This usually means balancing the rights of the individual with the rights of the organisation, or the public at large. Transparency means making it clear how and why information will be used, and not going beyond this without prior agreement.

In legal terms, biometric data is no more intrinsically ‘private’ than any other personal data. However, the law still requires that the purpose of a biometric scheme be clear from the outset and the use of biometric information be proportionate to the benefits which the scheme is likely to offer. Companies planning to roll out biometric systems will need to think carefully about how they collect information, how they store it and how and when it can be accessed or matched. In particular, for example, many individuals would reasonably be concerned if biometric data were to be used by companies for commercial gain. There are also some complex legal issues which will arise if biometric data is shared or transmitted, particularly if it will be transferred outside Europe.

Practical Steps towards Avoiding Illegality

In practice, companies will need to establish very clearly whether a biometric scheme is voluntary or compulsory (and what the consequences would be if an employee refused to participate in a ‘voluntary’ scheme), whether the scheme operates by means of verification or identification, and whether the use of biometric information is compatible with the purposes of the scheme. There is also the issue of function creep – ie whether different uses of information may emerge in the future which were not contemplated when the scheme was first set up.

Companies will need to consider what methods they will have to put in place to ensure the security of any biometric information which they hold and the cost of implementing these measures.
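By way of illustration only, one commonly discussed safeguard is to avoid holding raw biometric scans at all, storing instead a derived value and comparing it in constant time. The sketch below uses a salted hash as a stand-in for a real biometric template (actual biometric matching is approximate, so a plain hash is not a workable production design); the salt, names and data are hypothetical.

```python
import hashlib
import hmac

# Hypothetical sketch: store only a salted digest derived from a biometric
# template, never the raw scan. Real biometric matching is fuzzy, so a plain
# hash is a simplification used here purely to illustrate the principle.

SALT = b"per-deployment-secret-salt"  # in practice, per-user and securely held

def protect(template_bytes):
    """Derive the value actually stored, discarding the raw template."""
    return hashlib.sha256(SALT + template_bytes).digest()

stored = protect(b"example-template")  # the only value the company retains

def check(presented_bytes):
    """Compare freshly derived digests in constant time to resist timing attacks."""
    return hmac.compare_digest(stored, protect(presented_bytes))

print(check(b"example-template"))  # matching presentation
print(check(b"someone-else"))      # non-matching presentation
```

The design choice being illustrated is that a breach of the stored data then exposes only derived values, not the biometric itself, which narrows both the security risk and the data protection exposure.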

Finally, and perhaps most importantly, companies should consider how they will allay users’ concerns that use of their biometric data will somehow infringe their privacy rights or enable fraudsters to commit crimes or steal their identities. This is perhaps the biggest obstacle to overcome: biometric hardware will be successful only if users are willing to provide their data. Arguably, one reason why biometric technologies have not been adopted as widely as one might imagine is that Big Brother connotations have had a major impact on public perception.

Biometric technologies are likely to play a major role in the development of commercial security over the coming years, but it is imperative for companies to think through the legal issues first, or risk falling foul of increasingly complex and penal legislation.

Marcus Turle is a partner in the Technology Law Group of City law firm Field Fisher Waterhouse LLP.