Authors: Anna Jaffe and Audrey Vong

In the past six months, the global furore over public as well as private sector use of facial recognition technology has only increased. However, although this debate is increasing in volume, it is arguably not expanding in scope.

The current regulatory approach focuses too narrowly on specific aspects of the technology being regulated, rather than on the manner in which it is used and the outcomes it achieves. This not only limits the effectiveness of potential regulatory responses, but also makes it difficult for innovators to plan for and ‘future-proof’ the implementation of such technology. This failure to grasp the full significance of the issue increases the risk of losing a real and present opportunity to set a clear, coherent and comprehensive standard for responding to the challenges of new technologies.

In the past month alone, the New York Times reported that an American start-up, Clearview AI, claimed extensive relationships with numerous North American law enforcement agencies for the use of its proprietary identification service, through which an officer can upload a person’s photo and obtain, in real time, matching photos scraped from social media and other websites. Although subsequent investigations suggest that Clearview’s claims about both its law enforcement relationships and its technical capabilities may be inaccurate, this is only one example of the almost-daily headlines about the use of technology by governments and private sector entities to identify individuals, whether in isolation or as part of a crowd. These headlines have been met with proposals to restrict the use of such technology — ranging from self-regulatory principles and frameworks to outright bans — but no coordinated global consensus has yet emerged.

What is clear from the steps that have been taken to date is that much of the discussion around the use of facial recognition technology, and the corresponding proposals for its regulation, has been reactive in nature and accordingly far too narrow in approach.

So what do we talk about when we talk about ‘facial recognition’? Many proposals focus specifically (and in some cases exclusively) on the use of facial images for identity matching, but facial images are only one type of data that can serve this purpose. Significant advances in storage and analytics technology mean that the collection and use of biometric information is growing in both variety and volume. ‘Biometric information’ in this context includes facial images, but also ranges from fingerprints and iris patterns to identifiers less discernible to the naked eye yet just as unique to an individual (for example, patterns of human movement). These identifiers are both universal and, in many cases, easily and publicly accessible. Despite this, references abound to ‘facial recognition’ in isolation.

This narrow focus is also evident in the way current proposals for regulatory reform target what the relevant technology is, rather than what it does or could do in each case. This leads to two separate, but equally problematic, outcomes. The first is that too little emphasis is placed on whether each such technology is effective, fit for purpose or accompanied by appropriate safeguards. Concerns about the accuracy of, and potential for bias in, biometric identification technologies were raised by Australia’s bipartisan Parliamentary Joint Committee on Intelligence and Security in its late-2019 rejection of proposed Australian legislation on the use of identity-matching services (particularly in relation to facial images). These concerns have been reinforced by findings of the US National Institute of Standards and Technology that facial recognition algorithms can be up to 100 times more likely to misidentify Asian-American, African-American and Native American faces than those of white men. Similar concerns have been raised in reporting on Clearview’s facial matching technology.

This narrow focus on specific technologies also leaves little room to consider the broader contexts in which such technologies (and the data they collect) will be used. Biometric information is unique because it is easily accessed and collected, potentially without the awareness of its subject, and difficult to alter or conceal. However, the true power of this information may only be realised when it is combined with other information collected about the same person, creating a detailed, highly individualised and targeted portrait (through the ‘mosaic effect’ of combined data points) that is capable of almost literally following that person through their day-to-day life.

Ultimately, if this overly narrow or fragmented approach continues, these technologies — and their uptake by both the public and private sectors — are likely to keep expanding to fill the unregulated spaces.

Please click here to read the full briefing.

See our Technology, Media and Telecommunications hub here.

Key contacts

Paul Butcher
Director of Public Policy, London

Andrew Lidbetter
Consultant, London