Facial recognition technology has shrugged off protests

It is exceedingly rare for Big Tech players to police themselves. Most self-policing happens in the face of governmental or activist objections to certain types of technology, which push tech firms either to back down partially or to make a public relations spectacle of doing so. The paradox is that the departments of national governments forcing Big Tech to back off from certain applications are sometimes completely at odds with sister departments that are the largest customers of those very applications.

As the pandemic was raging in 2020, some Big Tech players made virtuous noises in public about backing down from facial recognition software for the ‘public good’. Amazon, IBM and Microsoft all said that they were either axing their programmes or placing ‘holds’ on police departments using their facial recognition algorithms. According to a letter to US legislators from IBM’s chief executive officer Arvind Krishna, the company had chosen to abandon general-purpose facial recognition and analysis software. His letter stated that his firm did “not condone uses of any technology… for mass surveillance, racial profiling, violations of basic human rights and freedoms…”. Amazon issued a one-year ban on police departments using Rekognition, its facial search technology. And Microsoft said it was waiting for new legislation to be adopted before selling its facial recognition technology to law enforcement organizations.

Facial recognition searches or comparisons generally fall into two use cases: verification and identification. Verification (or a one-to-one search) compares a stored photo of an individual with a newly captured photo to determine whether both are of the same person. This type of comparison can, for example, verify the identity of someone attempting to unlock a smartphone. Notably, Apple Inc and Alphabet Inc (Google’s parent) were absent from the 2020 conversation because their smartphone software is inextricably tied to the verification use case.

Identification (or a one-to-many search), by contrast, compares a photo of a single individual against stored photos of many individuals to determine if there is a potential match. This type of comparison can be used, for example, to generate investigative leads for an unknown individual in a crime-scene photo. It is the identification use case that concerned IBM, Amazon and Microsoft, because of the known biases (against people of colour, for instance) built into facial recognition systems. These biases have also driven the concerns of activist organizations such as the American Civil Liberties Union (ACLU).
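To make the distinction concrete, here is a minimal illustrative sketch of the two search modes. It is not any vendor’s actual product or API; it assumes each photo has already been converted into a numerical vector (an ‘embedding’) by a trained neural network, a step not shown here.

```python
# Illustrative sketch only, not any vendor's actual API. Assumes each photo has
# already been converted into a numerical vector (an "embedding") elsewhere.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(stored: np.ndarray, probe: np.ndarray, threshold: float = 0.8) -> bool:
    """One-to-one: does the probe photo show the same person as the stored photo?"""
    return cosine_similarity(stored, probe) >= threshold

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.8) -> list:
    """One-to-many: return every gallery identity scoring above the threshold,
    best match first."""
    candidates = []
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= threshold:
            candidates.append((name, score))
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)
```

The one-to-many search is where bias bites hardest: every comparison against the gallery is another chance for a false match, so error rates that differ across demographic groups compound as the gallery grows.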

Rekognition made news in 2018 for startling results on a test run by the ACLU. The organization scanned the faces of all 535 members of the US Congress against 25,000 public mugshots of arrested people. No member of Congress was in these images, yet Amazon’s system generated 28 false matches, with obvious implications. At the time, Amazon responded that the ACLU’s test was run at the system’s default confidence threshold of 80%, and not at the 95% level the company recommends for law enforcement applications, where false identification can have serious consequences. By 2020, such nuanced arguments about statistical validity seemed passé, and Rekognition, along with its rivals from IBM and Microsoft, was pulled from the market.
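The threshold dispute is easy to illustrate with invented numbers (these scores are made up for illustration and are not Amazon’s data or Rekognition’s API): the same list of candidate matches looks very different at an 80% cut-off than at 95%.

```python
# Invented similarity scores for illustration only; not Amazon's data or API.
candidate_matches = [
    ("candidate_a", 0.81),
    ("candidate_b", 0.84),
    ("candidate_c", 0.96),
    ("candidate_d", 0.79),
    ("candidate_e", 0.88),
]

def matches_above(candidates, threshold):
    """Keep only the candidates whose score clears the confidence threshold."""
    return [(name, score) for name, score in candidates if score >= threshold]

print(matches_above(candidate_matches, 0.80))  # four matches reported
print(matches_above(candidate_matches, 0.95))  # only one survives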

Well-funded startups such as Clearview AI were also absent from the 2020 conversation. Clearview was being used by over 2,000 law enforcement agencies and firms around the world and claimed at the time to have scraped more than 3 billion photos off the internet, including from popular social media platforms. It allegedly retained those photos in its database even after users deleted them or took their accounts private, though that claim remains unverified.

A recent article in The New York Times detailed the use of facial recognition by immigration officials at US airports. Such use was evidently baked into a 2017 order from then President Donald Trump banning visitors from some Muslim countries. The ban received much attention; its facial recognition mandates did not. And now, according to the article, 80% of entrants pass through facial recognition. Given all the noise in 2020, this was startling, so I decided to dig deeper.

A report (bit.ly/3hEaVRF) published by the US Government Accountability Office (GAO) details how federal agencies use, and plan to expand their use of, facial recognition systems. Ten of the 24 agencies surveyed plan to broaden their use by 2023, and ten are also investing in research and development on facial recognition. The report was the outcome of a study requested by the US Congress on federal agencies’ use of facial recognition in US fiscal year 2020. Some federal agencies that use facial recognition fell outside its scope, so there is no comprehensive survey of US government use of the technology. The GAO report is nonetheless indicative of overall trends: it says use of the technology is “increasingly common”.

Of the 24 US federal agencies surveyed, 18 (75%) currently use some form of facial recognition, with many owning more than one system, and most of them use it for physical security, cybersecurity or domestic law enforcement. The report says most of the systems in use by those surveyed are government-owned, though six come from commercial vendors, including Vigilant Solutions, Acuant FaceID and, of course, Clearview AI.

Siddharth Pai is co-founder of Siana Capital, a venture fund manager.
