
Facial Recognition, Part II: Processing and Bias

Looking for Truth and Finding Problems

As we saw in Part I, there are several reasons to look to facial recognition as a way to identify individuals. First, it’s how humans operate. Apart from speech, face analysis is the first and most important biometric cue humans use, which makes it critical to study accurately. A visual preference for faces and the capacity for rapid face recognition are present at birth, and the ability to recognize human faces typically develops during the first six months of life.

Facial recognition also has advantages for vision applications, with accuracy and non-intrusiveness making it the clear choice when attempting to identify subjects in video and surveillance camera footage. But don’t let the police procedurals fool you; facial recognition from video sequences and live footage still presents many challenges. Even small variations in lighting, angle, image noise, frame rate, and resolution make real-time recognition very difficult. And because no human face is constant, recognizing a face over time is far harder than recognizing, say, a can of soup.

“Face recognition is among the most challenging techniques for personal identity verification.”

Recognition of Human Faces: From Biological to Artificial Vision, Tistarelli et al. (source)

Even in humans, facial recognition involves many hidden mechanisms that are yet to be discovered. In contrast to earlier hypotheses of how the brain “sees,” face perception rarely seems to involve a single, well-defined area of the brain. The traditional “face area” of the brain appears to be responsible only for general shape analysis, not face recognition. In fact, according to the most recent neurophysiological studies, dynamic information is extremely important to how humans perceive biological forms and motion. This dynamic information not only helps distinguish one face from another, it can also tell the brain which parts of the face are most relevant to recognition.

Faces are difficult for everyone

Computer vision researchers have had the same experience: face recognition cannot be treated as a single, monolithic process. Instead, several representations must be combined into a multi-layered architecture. One example of how multi-layer face processing could work comes from researchers at the University of Sassari, whose proposed architecture divides face perception into two main layers. The first, as in the human brain, is devoted to extracting basic, stable facial features, while the second processes more changeable features such as lip movements and expressions.

Figure 1: A model of the distributed neural system for face perception [source]

It is worth noting that the encoding of changeable features of the face also captures some behavioral features of the subject: how facial traits change with a specific task or emotion. Human brains do this as well, while also adding social context and personal history to aid the recognition of both faces and attendant emotions.
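To make the idea concrete, here is a minimal sketch of such a two-layer pipeline. It is purely illustrative: the function names, the single-frame “invariant” descriptor, and the frame-difference “dynamic” descriptor are placeholder assumptions, not the Sassari architecture itself.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class FacePercept:
    invariant: np.ndarray  # stable structure cues used for identity
    dynamic: np.ndarray    # changeable cues such as expression or lip motion


def layer1_invariant(frame: np.ndarray) -> np.ndarray:
    """Layer 1: extract slowly varying facial structure from one frame."""
    # Placeholder: a real system would use landmarks or a learned embedding.
    return frame.mean(axis=(0, 1))


def layer2_dynamic(frames: List[np.ndarray]) -> np.ndarray:
    """Layer 2: track changeable features across frames (a crude motion cue)."""
    return np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])


def perceive_face(frames: List[np.ndarray]) -> FacePercept:
    """Run both layers and return one combined observation of the subject."""
    return FacePercept(invariant=layer1_invariant(frames[0]),
                       dynamic=layer2_dynamic(frames))
```

In a real system, the second layer is also where task- and emotion-dependent changes would be encoded, which is exactly the behavioral information discussed above.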

A setback: racialized recognition
But just as we are developing multi-level facial recognition systems that emulate the successful parts of human perception, scientists are running into other challenges that seem all too human as well: racial bias. We shouldn’t be surprised. Ground-breaking technologies seem to go hand-in-hand with unforeseen consequences. With something as potentially powerful and useful as facial recognition, it is important to interrogate our assumptions before relying on any system. In an earlier era, photographic film chemistry was biased toward resolving the colors of Caucasian skin tones. In the late 2000s, it was found that some mainstream facial recognition systems failed to detect people with dark skin tones. In both cases, public and industry pressure ensured that fixes were introduced.

In their 2018 paper, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Joy Buolamwini, a researcher at MIT Media Lab, and Timnit Gebru, then a postdoc at Microsoft, examined how machine learning algorithms can discriminate based on classes like race and gender. They evaluated commercial face analysis systems from Microsoft, IBM, and Face++ against the Pilot Parliaments Benchmark (PPB), a dataset of 1,270 male and female parliamentarians from Rwanda, Senegal, South Africa, Iceland, Finland, and Sweden.

They built this new dataset because existing benchmarks, such as IJB-A, used for a facial recognition competition run by the United States’ National Institute of Standards and Technology (NIST), and Adience, used for gender and age classification, were both overwhelmingly skewed toward people with lighter skin. Once again, the datasets themselves can obscure and even undermine the value of the algorithms being tested.

The authors argue for greater transparency in how these algorithms are developed, asking companies to disclose the demographic and phenotypic makeup of the images used to train AI models and to report performance levels for each subgroup.
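The reporting they ask for is easy to picture in code. The sketch below is illustrative only: the records and field names are invented placeholders, not PPB results, and it simply breaks accuracy down by intersectional subgroup instead of reporting one aggregate number.

```python
from collections import defaultdict

# Each record: (predicted_gender, true_gender, skin_type, gender).
# These rows are invented placeholders, not data from the PPB benchmark.
records = [
    ("male",   "female", "darker",  "female"),
    ("female", "female", "lighter", "female"),
    ("male",   "male",   "darker",  "male"),
    # ... one record per benchmark image
]

totals, correct = defaultdict(int), defaultdict(int)
for predicted, actual, skin_type, gender in records:
    group = (skin_type, gender)  # intersectional subgroup
    totals[group] += 1
    correct[group] += int(predicted == actual)

# Report accuracy per subgroup rather than a single overall figure.
for group in sorted(totals):
    print(f"{group}: accuracy {correct[group] / totals[group]:.1%} (n={totals[group]})")
```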

Infrared imaging is accurate, and hopefully more colorblind

Another option is infrared imaging, which is far less sensitive to skin tone. Early reviews of the iPhone X praised the phone’s ability to detect and recognize faces, independent of race or skin color. Face ID on the iPhone X works by projecting more than 30,000 infrared dots onto your face and analyzing the resulting pattern to create a precise (and less biased) depth map of your face.

This is how Apple represents the very complex calculations behind 3D facial recognition.
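Apple does not publish Face ID’s internals, so the sketch below is only a toy picture of depth-map verification in general: a freshly captured depth map is compared with an enrolled template and accepted if it is similar enough. The correlation measure, the threshold, and the random arrays standing in for depth maps are all illustrative assumptions.

```python
import numpy as np


def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the probe if its depth map correlates strongly with the template."""
    a = (enrolled - enrolled.mean()).ravel()
    b = (probe - probe.mean()).ravel()
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return similarity >= threshold


# A noisy re-capture of the same face should still pass.
rng = np.random.default_rng(0)
template = rng.random((64, 64))                        # stand-in for an enrolled depth map
probe = template + rng.normal(0.0, 0.01, template.shape)
print(verify(template, probe))                         # True for a close match
```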

Infrared cameras now come in many varieties and capability levels, from NIR imaging, which sits just beyond the visible range, to more exotic MWIR and LWIR thermal versions that can detect heat even through visually complex environments like smoke and fog. The potential applications are broad, running the gamut from commercial to government and military: from tagging faces in social networks to crowd surveillance and sophisticated homeland security operations.

The other intriguing factor in using infrared imaging for facial recognition (especially thermal infrared) is night-time surveillance, where there is little or no light to illuminate faces. Today thermal facial recognition is used in many applications, most notably military ones, where it allows for covert data acquisition. Choosing infrared makes the system less dependent on external light sources and more robust to incident angle and lighting variations. For the same reason, your iPhone X can identify you in the darkness of a club or campsite.

Other clever uses of infrared imaging data lead to completely new solutions, such as “Face Recognition and Drunk Classification Using Infrared Face Images” by Chilean researchers. Starting from a simple observation, that alcohol consumption visibly changes the thermal pattern of the face, they were able to create a system that can reliably say “yes, that’s you, and yes, you shouldn’t have another one.”

Facial recognition is playing in the very biggest fields
The stakes are high. The authors of previous studies point out that facial recognition software is “very likely” to be used for identifying criminal suspects. Government security services want to collect and use as much data as they can. Huge consumer companies like Apple, Google, and Amazon are looking into how to use facial recognition at every turn in our digital lives and are making acquisitions to move their plans forward.