Private Images: Should We Really Be Concerned about Our Privacy in a World with Facial Recognition?

Updated: 19 July 2021

The human face is something we are programmed to recognize; it has an evolutionary history that reflects how our species has changed over time. But now this essence of humanity, our face, has entered a new era, one of commoditization and control by systems built on its morphology: in other words, facial recognition.

But is facial recognition a digital dystopia, or part of a natural digital evolution that we should all accept?

Should We Be Concerned About Facial Recognition?

The Current State of Play in Facial Recognition

Our face is biometric data. As such, it requires special attention with regard to privacy – a view backed up by legislation including GDPR and HIPAA, as well as many similar privacy laws. GDPR Article 9 places biometric data in the special category of personal data, and Article 4(14) specifically calls out facial images as biometric data “relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person”.

Still, privacy considerations aside, facial recognition technology (FRT) is booming. Figures vary, but all point to market growth: Mordor Intelligence, for example, predicts a market value for facial recognition technology and services of $10.19 billion by 2025, growing at a CAGR of 12.5%.

Examples of FRT application across industry sectors show its potentially wide scope of use:

Law enforcement

An obvious use of FRT is in law enforcement. Images from dystopian novels spring to mind, but the use of facial recognition in policing is here and now: it is used in many parts of the world, including Europe, the UK, and China.

The EU Commission is funding a project that will look at the exchange of facial images across EU borders, and Europol is looking to become a global hub for biometric data sharing.

A Georgetown Law, Center on Privacy and Technology report into unregulated police face recognition in America found that 50% of American adults are in one law enforcement face recognition system or another. The report’s concern is that the databases holding these biometric data are largely unregulated. Its recommendations include better legislation that permits searches only on a “reasonable suspicion” standard, and the removal of facial data from databases for anyone found innocent.

Border control

The use of facial recognition at borders is convenient and efficient, and it is widely adopted in Europe, for example. In Australia, the government set out to use biometric data to automate 90% of passenger border checks by 2020. Hong Kong and Singapore airports are further examples of FRT being used to expedite passengers.

Schools and universities

China recently implemented facial recognition in schools. The technology was introduced for general safety reasons but was also found to be used to track students and even analyze their attentiveness in class. After public pushback on the creepy nature of such FRT applications, China is now pulling the technology out of general school use.

Verification and authorization

As digital identity becomes ubiquitous, biometric data is increasingly tied to a person’s digital persona. Face biometric data, for example, is being used for age verification, as well as for transaction authorization and authentication. This data has the potential to become critical infrastructure in its own right, as the backbone of our digital personas.
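To make the mechanics concrete, here is a minimal sketch of the 1:1 verification step that typically sits behind face-based authentication: a freshly captured face embedding is compared against an enrolled template and accepted if their similarity clears a threshold. The embedding dimension, threshold value, and function names here are illustrative assumptions, not any particular vendor’s API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_face(enrolled: np.ndarray, probe: np.ndarray,
                threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe match the enrolled template?

    The 0.6 threshold is illustrative; real systems tune it to trade off
    false accepts (security) against false rejects (usability).
    """
    return cosine_similarity(enrolled, probe) >= threshold

# Illustrative usage with synthetic 128-dimensional embeddings; in practice
# these vectors come from a face-embedding model applied to camera frames.
rng = np.random.default_rng(0)
enrolled_template = rng.normal(size=128)
noisy_recapture = enrolled_template + rng.normal(scale=0.1, size=128)
print(verify_face(enrolled_template, noisy_recapture))  # True: same person
```

The privacy point is that the enrolled template is itself biometric data: wherever it is stored, it carries the same breach and misuse risks discussed later in this article.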

Social media

Social media is perhaps one of the most pervasive uses of facial recognition, with many of us using it without even thinking about it. The tag suggestions setting in Facebook, for example, let you tag friends while facial recognition matched images of a person across the platform. Privacy pressure has since resulted in Facebook offering greater control over this setting. Nevertheless, our faces remain central to most social platforms.
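Tag suggestion is a 1:N identification search rather than the 1:1 check sketched above: a face found in a new photo is compared against a gallery of known people’s templates, and the closest sufficiently similar match is proposed. The sketch below shows the idea under the same illustrative-embedding assumptions; a real platform would run approximate nearest-neighbour search over millions of templates rather than a linear scan.

```python
import numpy as np

def suggest_tag(probe: np.ndarray, gallery: dict[str, np.ndarray],
                threshold: float = 0.6) -> str | None:
    """1:N identification: return the best-matching identity, or None.

    Linear scan for clarity; the threshold and gallery layout are
    illustrative assumptions, not any platform's actual pipeline.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cos(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Illustrative usage: a tiny gallery of two enrolled users.
rng = np.random.default_rng(1)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
photo_face = gallery["alice"] + rng.normal(scale=0.1, size=128)
print(suggest_tag(photo_face, gallery))  # "alice"
```

The privacy stakes scale with N: a 1:1 check only confirms an identity you claim, whereas a 1:N search can pick an unconsenting face out of a crowd.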

Retailers

A number of companies offer FRT for the retail sector, where the technology is used to detect and deter shoplifters and/or to prevent certain types of return fraud.

Whilst there are obvious and positive uses of facial recognition technology, the question we must always ask is: what about the privacy implications? Facial biometric data is, after all, our face to the world, and one that arguably identifies us in ways that other personal data does not.

Facial Recognition: The Risks and Trade-offs

The risk list for facial recognition privacy is long; some of the leading contenders are:

Data breach

Biometric data, like any data, is at risk from cybercrime as well as privacy violations; the two are often intrinsically linked. In 2019, Suprema, which supplies government and financial sector clients, suffered a breach impacting almost 28 million data records, many containing biometric (face and fingerprint) data.

Misuse by authorities

Perhaps one of the most concerning areas of privacy violation associated with facial biometrics is at the state level. The tracking of ethnic minorities in China is an obvious example of how FRT can be misused. Human Rights Watch (HRW) described China’s use of FRT as “Algorithms of Suppression”, detailing how millions of ethnic minorities in China are being held in camps, their movements tracked using FRT.

In other parts of the world, citizens are showing concern over the implementation of facial recognition by authorities. In the USA, several states have now banned the use of FRT, and in California, students have protested against the use of facial recognition in universities for security purposes.

Safety (e.g. stalking and misuse)

Facial recognition apps such as Clearview AI, and before it the Russian Findface app, have been called out as potential tools for stalkers. Clearview AI can be used to take a photo of a person in the street and quickly retrieve personal data, including their address. Interestingly, the company behind Findface closed down the consumer app and turned its talents to a Russian surveillance system that constantly scans citizens’ faces.

Ownership of facial data

The ownership of facial images can also come into question: harvested images can be used to create a synthetic ID, fake social media accounts, and even to feed a deepfake scam. I have myself been a victim of a fake social media account scam. Images of my face were taken from another account and used to set up a new account pretending to be me; the point was to extort money from me to take down the fake account – it didn’t work. Whilst this scam did not specifically use FRT, the widespread availability of cheap FRT-based apps could propagate crimes of this nature.

Deepfakes take this further, potentially using facial recognition and stolen face biometrics to power cybercrime scams.

Overuse

Just because you can doesn’t mean you should. There is a general overuse of technology for technology’s sake in the world at present. In the case of facial recognition, this ‘data overuse’ is perhaps exemplified by Google’s Nest Hub Max, which uses the company’s Face Match technology to ‘improve product experience’. The question we need to ask is: are we prepared to share this data for a (hopefully) improved user experience?

We come back, time and again, to the idea of a usability vs. privacy trade-off which we have been dealing with in the world of security for many years.

Face-Off for Facial Recognition and Privacy

We are, it seems, at a juncture in the history of technology. We can’t get enough of the latest gadgets, and facial recognition is needed to power many AI-based devices and apps. But like many good things, it comes at a cost. Are we prepared to make a trade-off between privacy and technology? Are we happy to take the risks inherent in sharing our face, the deepest root of our humanity? I myself am torn. I am a gadget fan, but privacy and anonymity (in certain circumstances) are important to me. Is privacy important to you?

Written by: Susan Morrow

Cybersecurity specialist

Susan is an ex-chemist who transitioned to the IT security sector in the early 1990s, where she founded a cybersecurity start-up. Since then, she has built a knowledge base across diverse areas including encryption, digital rights management, digital signatures, privacy, and online identity.

Susan has been involved in identity projects addressing government, enterprise, and consumer needs. She has helped design and commercialize award-winning software solutions used by organizations of all sizes, worldwide.

Currently, Susan works as Head of R&D at Avoco Identity, which specializes in data orchestration solutions that facilitate the identity ecosystem.

Susan is also a tech writer and has a regular blog about digital identity at CSOOnline:

https://www.csoonline.com/author/Susan-Morrow/

In 2020, Susan was shortlisted for a top 100 Women in Tech award and included in the long list of the most influential women in technology in the UK by Computer Weekly magazine.

Her mantra is design for a digital life, not just digital identity.
