Detectives have been using facial recognition to solve crimes for almost as long as cameras have existed. It is but a logical extension of the modern crime solver's toolkit to apply artificial intelligence (AI) to people's most identifiable physical feature: the face. Facial recognition opens enormous possibilities for law enforcement, no doubt. An image captured at the scene of a crime can now be screened against photographs of entire populations for a match within a matter of hours. Most people are not lawbreakers. Yet the idea of being watched by devices linked to vast databases far out of sight makes liberal societies uneasy, even those that have already yielded their fingerprints and iris scans to official and commercial data gatherers. It's just too creepy, some complain, and civil liberty activists in the West consider it an unacceptable invasion of privacy. San Francisco, for instance, has banned its police from using facial recognition. The intrusion causing alarm, however, has little to do with the technology itself and everything to do with the all-pervasive surveillance it enables. Today, very few of our public spaces are hidden from cameras, some of which hover over us in the air. With the benefits of the technology so well touted, should there be no rules governing it?
How accurately machines identify faces is a major concern. Deployed in law enforcement, false matches could result in a miscarriage of justice. Even a low error rate could see such evidence rejected by courts. It is in the judiciary's interest, all the same, to let technology aid police work. The algorithms used to identify individuals may have moved beyond geometric and photometric approaches to three-dimensional recognition, skin texture analysis and thermal imaging, but further advances are needed for the technology to gain reliability. The first criticism to address is that facial recognition is still not smart enough to read emotions or work equally well for all racial groups. With iterative use, it will improve. It is thus a foregone conclusion that justice systems will increasingly rely on its forensic applications.
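The matching step at the heart of these systems, and the error trade-off it creates, can be sketched in a few lines: a model reduces each face to a numeric embedding, and two faces are declared a "match" when their embeddings are close enough. Where the threshold is set directly trades false matches against missed ones. The sketch below is a minimal illustration only; the embedding vectors and the threshold value are invented for the example, not taken from any real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, candidate, threshold=0.8):
    """Declare a match when similarity clears the threshold.
    Lowering the threshold catches more true matches but also
    raises the false-match rate discussed above."""
    return cosine_similarity(probe, candidate) >= threshold

# Hypothetical embeddings: a crime-scene photo screened
# against two database entries.
probe = [0.9, 0.1, 0.4]
same_person = [0.85, 0.15, 0.38]    # close in embedding space
different_person = [0.1, 0.9, 0.2]  # far away in embedding space
```

In real deployments the embeddings come from a trained neural network and the threshold is tuned on benchmark data; the point here is only that a single number decides where "possible match" ends and "no match" begins.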
Since such tools can be put to mala fide use as well, it is imperative that we frame rules for them well in time. Rogue drones equipped with the technology, for example, should never be in a position to carry out an assassination. Nor should an unauthorized agent be able to spy on or stalk anyone. Apart from California, the European Union has also decided to exercise some caution before exposing people to the technology. Privacy is paramount in these jurisdictions. At the other end of the spectrum, China has placed hundreds of millions under state surveillance in public, though a few protests have erupted there, too. India, which has recently accepted privacy as a fundamental right, would do well to tilt the Western way on this. We need regulations that restrict the technology's use to the minimum required to serve justice and ease commercial operations. For the latter, customer consent should be mandatory. There will be some overlaps. Its use at an aerobridge to board an aircraft, for example, could serve the interests of both state security and the airline, but data-sharing between the two could risk leakage. We have given up plenty of our privacy to online operators, many of us unwittingly. But awareness is rising, and we need to come to grips with what is okay and what is not.