Face detection algorithms are prone to making errors and often show gender and racial bias
Governments should ensure that while surveillance tools are used to weed out criminals and terrorists, the data of citizens is not misused by officials
At a busy public square covered by surveillance cameras, a fleet of police cars screeches to a halt. Cops rush out of their vehicles, accost a man and hurriedly cuff him. The reason: their face recognition system has identified him as a terrorist whose image the police had in their database. Until a few years ago, one would have thought this was a scene from the HBO series Person of Interest. Not any longer. Police in several countries now have the technology to do this.
Some face recognition tools even use artificial intelligence (AI) to identify faces partially hidden by sunglasses, caps, beards or face masks. But these technologies are still a work in progress and prone to mistakes. This, along with privacy concerns, is one reason cities such as San Francisco and Oakland have banned the use of face recognition in surveillance cameras by public agencies.
“Banning the use of technologies such as facial recognition in cities such as San Francisco, Oakland, and Somerville is an excellent step forward to help fight crime. Research has shown, over and over, that facial recognition technology does not work reliably, and it is more likely to misidentify women and people of colour. Banning this technology helps the police to not waste valuable time chasing the wrong people,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation and technical adviser for the Freedom of the Press Foundation.
An independent review of the London Metropolitan Police’s face recognition-based surveillance system, conducted by academics from the University of Essex in July 2019, revealed that in 81% of cases the system erroneously flagged people who were not wanted for anything.
Further, many of these AI systems have repeatedly shown gender and racial bias. A case in point is the 2018 study led by MIT Media Lab, which found that gender classification systems sold by IBM, Microsoft and Face++ showed error rates up to 34.4% higher for darker-skinned females than for lighter-skinned males. A May 2019 study by the Georgetown Law Center on Privacy and Technology pointed out that police are using flawed data to run facial recognition searches.
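The kind of disparity such audits measure can be illustrated with a short sketch: run the same classifier on each demographic subgroup separately and compare error rates. The numbers below are made up for illustration and are not the study’s actual data.

```python
# Hypothetical bias audit sketch (illustrative data, not the MIT study's):
# compare a gender classifier's error rate across two demographic subgroups.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with ground-truth labels."""
    wrong = sum(1 for p, t in zip(predictions, labels) if p != t)
    return wrong / len(labels)

# Toy audit: same classifier, two subgroups of ten images each.
darker_female_preds = ["M", "F", "M", "F", "M", "M", "F", "M", "F", "M"]
darker_female_truth = ["F"] * 10
lighter_male_preds  = ["M"] * 9 + ["F"]
lighter_male_truth  = ["M"] * 10

gap = (error_rate(darker_female_preds, darker_female_truth)
       - error_rate(lighter_male_preds, lighter_male_truth))
print(f"error-rate gap: {gap:.1%}")  # → error-rate gap: 50.0%
```

A gap this large on a balanced test set is the signal auditors look for: the model is not merely inaccurate, it is unevenly inaccurate across groups.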
In many cases where police departments didn’t have actual images of a suspect, they turned to artist sketches, celebrity doppelgangers or software-generated images, despite several studies showing that they don’t give accurate results and can lead to false matches.
Consider a recent example. Earlier this month at the DefCon 2019 cybersecurity conference in Las Vegas, hacker and fashion designer Kate Rose demonstrated how clothes printed with random vehicle license plate numbers could be used to trick automatic license plate readers (ALPRs) and inject junk data into their systems. ALPRs are computer-controlled camera systems that use image recognition software to read license numbers and catalogue them to determine a car’s whereabouts in a police investigation. Rose’s clothing line showed that these systems are not yet foolproof.
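To see why printed plate numbers pollute an ALPR database, consider a toy reader — a hypothetical sketch, not any real ALPR software — that logs every OCR’d string matching a plate-like pattern. Anything the camera reads that looks like a plate, including numbers printed on a shirt, becomes a junk record.

```python
import re

# Toy ALPR logger (hypothetical): log every string that matches a
# plate-like pattern, with no check that it came from an actual vehicle.
PLATE_PATTERN = re.compile(r"\b[A-Z]{3}[0-9]{4}\b")

def log_plates(ocr_text, log):
    """Append every plate-like match in the OCR'd text to the log."""
    for plate in PLATE_PATTERN.findall(ocr_text):
        log.append(plate)

log = []
log_plates("rear bumper: KJH2381", log)            # genuine plate
log_plates("t-shirt print: XQZ9904 ABC1234", log)  # Rose-style decoy garment
print(log)  # → ['KJH2381', 'XQZ9904', 'ABC1234']
```

Two of the three logged “plates” never belonged to a car, which is exactly the junk-data injection Rose demonstrated.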
For those concerned about the use of face and image recognition technologies in surveillance cameras, Rose’s clothing line can come in handy. Researchers at Pittsburgh’s Carnegie Mellon University, too, have developed eyeglasses with large frames that obscure about 6.5% of the pixels in an image of the wearer’s face; with them, they managed to fool face recognition tools by Face++.
A 2018 joint study involving Fudan University in China, the Chinese University of Hong Kong and Indiana University showed how a simple sports cap fitted with tiny infrared LEDs could fool computer vision in 70% of test runs by projecting dots of light, invisible to the human eye, onto the wearer’s face.
Hyphen Labs is building anti-surveillance clothing called HyperFace, which fools cameras with false faces based on ideal algorithmic representations of a human face. It works by redirecting the detector’s attention to the nearby false-face regions, thereby reducing the confidence score of the true face.
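The mechanism described above can be sketched in toy form — an assumed simplification, not Hyphen Labs’ actual code. If a detector keeps only its highest-confidence candidate, decoy regions designed to resemble an “ideal” face can out-score the real one and divert the detector.

```python
# Toy sketch of decoy-pattern camouflage (assumed mechanism, not
# HyperFace's real implementation): a detector that keeps only its
# highest-confidence candidate region can be diverted by decoys that
# mimic an "ideal" face and therefore score higher than the true face.

def best_detection(candidates):
    """Return the (region, confidence) pair the detector is most sure about."""
    return max(candidates, key=lambda c: c[1])

# Without decoys, the true face is the top candidate.
plain = [("true_face", 0.62), ("background", 0.10)]
print(best_detection(plain))        # → ('true_face', 0.62)

# Decoy patches near the face out-score it, so the detector locks onto
# them instead — the true face's relative prominence drops.
with_decoys = plain + [("decoy_patch_1", 0.88), ("decoy_patch_2", 0.79)]
print(best_detection(with_decoys))  # → ('decoy_patch_1', 0.88)
```

Real detectors return many scored regions rather than a single winner, but the principle is the same: saturate the image with candidates the model likes better than the face you want hidden.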
Then there is CV Dazzle, which shows how unusual hairstyling and makeup can disrupt facial symmetry and provide camouflage against surveillance cameras. Its creators claim that by using light colours on dark skin and dark colours on light skin, partially obscuring the eyes with hair, and masking the elliptical shape of the head, anyone can make themselves unrecognizable to face detection algorithms.
For their part, governments should ensure that while surveillance tools are used to weed out criminals and terrorists, citizens’ data is not misused by officials for personal or political vendettas. This is easier said than done.