Black Mirror, as its makers aptly explain, “explores a twisted, high-tech, near-future where humanity’s greatest innovations and darkest instincts collide”, and is perhaps the most mind-bending television series of all time (Warning: spoilers ahead).
I started the fourth season of the series with an episode called “Metalhead”. The “Metalhead” in the title is an autonomous robotic dog, armed with a wide range of barbarous tools—from embedded guns in its limbs, to scanners that it uses to track down human beings.
The episode involves one of the Metalheads killing two of the three protagonists while it hunts down the third, Bella. Bella and her companions had been trying to steal goods from a warehouse. The Artificial Intelligence (AI)-enabled creature chases Bella throughout the episode, as she successfully evades the Metalhead… mostly.
The episode ends abruptly. It leaves a narrative vacuum, one the viewer must fill with their own ruminations over the future of AI.
Somewhat serendipitously, a few days later, I stumbled upon Yuval Noah Harari, the author of the modern cult classic Sapiens, on a podcast. Harari posits that for thousands of years human beings have used superior intelligence as a justification for our exploitative and cruel treatment of all other animals on this planet. But with the dawn of artificial intelligence, that justification will cease to hold.
Human beings will still have the ability to “feel” and possess consciousness, but we shall no longer be the most intelligent beings on earth. In mere nanoseconds, AI algorithms will be able to process thousands of pages of contracts and find loopholes in them, or diagnose diseases anywhere in the world.
The argument for being cruel to creatures less intelligent than us, Harari said, will come back to bite us in the buttocks. A more intelligent autonomous machine like the “Metalhead” dog in the Black Mirror episode might treat us in the same vicious and pitiless manner, because that is what its algorithms would have learnt from mapping human behaviour towards less intelligent beings.
Harari raises some important questions regarding this issue in his new book, Homo Deus. This is an excerpt from its second chapter:
“With the help of vaccinations, medications, hormones, pesticides, central air-conditioning systems and automatic feeders, it is now possible to pack tens of thousands of pigs, cows or chickens into neat rows of cramped cages, and produce meat, milk and eggs with unprecedented efficiency. In recent years, as people began to rethink human–animal relations, such practices have come under increasing criticism. We are suddenly showing unprecedented interest in the fate of so-called lower life forms, perhaps because we are about to become one. If and when computer programs attain superhuman intelligence and unprecedented power, should we begin valuing these programs more than we value humans? Would it be okay, for example, for an artificial intelligence to exploit humans and even kill them to further its own needs and desires? If it should never be allowed to do that, despite its superior intelligence and power, why is it ethical for humans to exploit and kill pigs? Do humans have some magical spark, in addition to higher intelligence and greater power, which distinguishes them from pigs, chickens, chimpanzees and computer programs alike? If yes, where did that spark come from, and why are we certain that an AI could never acquire it?”
Fears about a technological singularity, though, are not new. Ray Kurzweil, a well-known futurist, claims that 2029 is the year when an AI will pass a valid Turing test and thereby achieve human levels of intelligence.
“I have set the date 2045 for the ‘singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created,” according to a statement he made to futurism.com. Many of the big daddies of science and technology, like Stephen Hawking, Elon Musk, and even Bill Gates, have warned about such a future.
Even before Black Mirror, there were movies and books aplenty that alluded to such risks. The most prescient of all, I believe, has been HAL, the sentient computer in Stanley Kubrick’s 2001: A Space Odyssey. The machine’s villainous instincts, as shown in the film, can be attributed to HAL simply following its programming.
In Colossus: The Forbin Project (1970), on the other hand, the protagonists actually do not mind being governed by a sentient machine. A completely rational computer with superior intelligence, they believe, might be able to create a fairer society for everyone, unlike humans, who are susceptible to greed and self-interest.
At the film’s conclusion, the AI profoundly quips, “You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride.”
Another popular fictional reference to a robot-dominated future was Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? In it, he postulates that what really makes a human unique is the ability to comprehend and identify with another creature’s feelings. In other words, our capacity for empathy.
Robots, according to Dick, are “psychopaths”, inhuman because they fail to show empathy towards other beings. But do we really possess these “human” qualities if we butcher these supposed lower life forms? What, then, makes us “human”?
Survival can serve as a justification for consuming animal meat and using animal-based products, even though, in most parts of the world, one can now lead a healthy life as a vegetarian.
If an AI sees the potential danger of a clampdown from human beings, driven by our fear of a singularity, will it too start murdering humans for its survival?
Perhaps Harari is wrong, and we can develop enough mechanisms to safeguard ourselves from the scenario the episode depicts. But as the British science-fiction writer Arthur C. Clarke states in the second of his three laws, “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
Even for someone like me, who ardently enjoys his butter chicken and barbecued pork spare ribs, Harari’s thesis, coupled with the Black Mirror episode, does bring to light a different dimension to the outrage against animal cruelty and how we treat other animals.
Joseph Weizenbaum, in his 1976 book Computer Power and Human Reason: From Judgment to Calculation, warned that AI must never replace professions that require care, respect and empathy, such as that of a therapist or a police officer.
There are already chatbots on the internet that serve as pseudo-therapists, and a scenario where AI takes over such roles does not seem too far-fetched, as the Black Mirror episode shows.
The late management pundit Peter Drucker inspired many a B-school graduate with his famous saying, “The best way to predict the future is to create it.”
In an AI-driven future, Drucker’s statement could not sound more ironic.
Archit Puri is a public policy researcher.