When billionaire futurist and genius inventor Tony Stark meddled with Artificial Intelligence (AI) in the movie Avengers: Age of Ultron, he ended up inadvertently creating an all-powerful rogue AI that tried to engineer the extinction of the human species. Stark’s real-life counterpart is having none of that. Elon Musk, chief executive officer of SpaceX and Tesla Inc.—actor Robert Downey Jr modelled his portrayal of Stark on Musk, who has often been called the closest thing to a real-world Stark—is famously wary of AI run amok. That wariness has been on full display this week in his public spat with Facebook founder and chief executive officer Mark Zuckerberg.
The disagreement has more than a whiff of clashing egos to it—unsurprising, given that both men are giants of the tech world and the global economy. But that should not distract from the very real schism between the two schools of thought on AI that they represent. Facebook is built on AI routines, used for everything from tagging photographs to curating news feeds. Little wonder Zuckerberg sees the positive in it and talks up its potential for “diagnosing diseases to keep us healthy…(and) improving self-driving cars to keep us safe….” Or that he issued less-than-veiled criticism of Musk for what is, from Zuckerberg’s perspective, AI alarmism. Musk, on the other hand, has been predicting a doomsday scenario since at least 2014, when he called AI humanity’s “biggest existential threat” in an address at the Massachusetts Institute of Technology. His casually scornful dismissal of Zuckerberg’s understanding of AI is par for the course.
Implausible as Musk’s warning seems, he is far from being a kook. Stephen Hawking and Bill Gates have echoed his fears. So have plenty of others at the forefront of the field. For instance, Shane Legg, a co-founder of DeepMind, believes that “human extinction will probably occur, and technology will likely play a part in this”. Last year, DeepMind’s AlphaGo program beat South Korea’s Lee Sedol, one of the world’s top players of the ancient Chinese board game Go—a game far more dependent on abstract thought and intuition than chess, and thus much harder for AI—a milestone that hadn’t been expected for at least another decade.
The fears of Musk, Hawking and the others are as much philosophical as they are technological. From Aristotle and Gautama Buddha to Ibn Sina and David Hume, the question of the self—and what constitutes awareness and sentience—has been central to philosophical inquiry. The AI sceptics’ preoccupation with the technological singularity—the point at which AI enters a runaway cycle of self-improvement beyond human control, gaining self-awareness and outstripping its human creators—is a natural outgrowth of this.
The question has surfaced time and again in popular culture, a useful if imprecise indicator of the zeitgeist. Isaac Asimov’s three laws of robotics have entered both the cultural lexicon and the professional world of AI and robotics. 1982’s Blade Runner was notable as much for its rumination on what, if anything, separates artificially created intelligence from the human variety as it was for being one of the progenitors of cyberpunk. The Terminator films of the 1980s and early 1990s preferred the more direct approach to the question of man versus machine, while The Matrix and its sequels looked for answers in cod philosophy and CGI martial arts. Reflecting AI advances since then, 2013’s Her offered a different and surprisingly believable take: given the increasing sophistication of chatbots and digital assistants like Siri and Alexa, the idea of a human-AI romance a decade or so down the line isn’t particularly startling.
The problem is that the singularity they depict—and Musk fears—cannot, by definition, be predicted. That creates a dilemma: how does one contain a threat that AI sceptics insist is real but that is too vague to be clearly defined? One option is the market solution: putting up money to fund research into ethical and safe AI, as Musk has done with OpenAI. The other is more dangerous. At a gathering of US governors earlier this month, Musk pressed them to “be proactive about regulation”. What precisely would that entail? Pure research and its practical applications interact constantly to push the field of AI and robotics forward. Government control and red tape to stave off a vague, imprecise threat would be an innovation-killer.
But there are more mundane, less apocalyptic AI threats that can be predicted. For one, AI as it exists today—whether used by Facebook or underlying Google’s search engine—lives and dies by data. That makes the questions of data privacy and consent being raised around the world, including in India, vital. Then there is the problem of AI that exceeds its parameters. In a world where self-driving cars and autonomous weapons and weapons platforms—the US Air Force is testing an AI flight combat system while China is working on cruise missiles that incorporate AI—are near-future realities, lapses could be catastrophic. The same goes for AI routines used for governance. Mistakes in facial recognition, or discrimination against welfare recipients due to profiling based on skin colour or caste, could ruin lives.
If Musk and Zuckerberg’s dust-up serves to raise the profile of these issues, it will have done some good. An ongoing public debate on the future of AI—before we find ourselves stuck outside the pod bay doors, pleading to be let in—is important.
Is Elon Musk’s warning about the threat AI poses to humanity realistic? Tell us at views@livemint.com