Preparing for the AI apocalypse

The spat between Elon Musk and Mark Zuckerberg points to important questions about the future of AI

Livemint
Updated: 28 Jul 2017, 03:40 AM IST
As dependence on AI grows, lapses could be catastrophic. Illustration: Jayachandran/Mint

When billionaire futurist and genius inventor Tony Stark meddled with Artificial Intelligence (AI) in the movie Avengers: Age Of Ultron, he ended up inadvertently creating an all-powerful rogue AI that tried to engineer the extinction of the human species. Stark’s real-life counterpart is having none of that. Elon Musk, chief executive officer of SpaceX and Tesla Inc.—actor Robert Downey Jr modelled his portrayal of Stark on Musk, who has often been called the closest thing to a real-life Stark—is famously wary of AI run amok. That wariness has been on full display this week in his public spat with Facebook founder and chief executive officer Mark Zuckerberg.

The disagreement has more than a whiff of clashing egos to it—unsurprising, given that both men are giants of the tech world and global economy. But that should not distract from the very real schism between the two schools of thought on AI that they represent. Facebook is built on AI routines. They are used for everything from tagging photographs to curating news feeds. Little wonder Zuckerberg sees the positive in it and talks up its capability for “diagnosing diseases to keep us healthy…(and) improving self-driving cars to keep us safe….” Or that he issued less-than-veiled criticism of Musk for what is, from his perspective, AI alarmism. Musk, on the other hand, has been predicting a doomsday scenario since at least 2014, when he called AI humanity’s “biggest existential threat” in a Massachusetts Institute of Technology address. His casually scornful dismissal of Zuckerberg’s understanding of AI is par for the course.

Implausible as Musk’s warning seems, he is far from being a kook. Stephen Hawking and Bill Gates have echoed his fears. So have plenty of others at the forefront of the field. For instance, Shane Legg, one of the leaders at DeepMind, believes that “human extinction will probably occur, and technology will likely play a part in this”. Last year, DeepMind’s AlphaGo program beat South Korea’s Lee Sedol, world champion of the Chinese board game Go—far more dependent on abstract thought and intuition than chess, and thus much more difficult for AI—a milestone that was not expected for at least another decade.

The fears of Musk, Hawking and the others are as much philosophical as they are technological. From Aristotle and Gautama Buddha to Ibn Sina and David Hume, the question of self—and what constitutes awareness and sentience—has been central. AI sceptics’ preoccupation with the technological singularity—the point at which AI will enter a phase of self-improvement cycles beyond human control, gaining self-awareness and outstripping its human creators—is a natural outgrowth of this.

The question has surfaced time and again in popular culture, a useful if imprecise indicator of the zeitgeist. Isaac Asimov’s three laws of robotics have entered both the cultural lexicon and the professional world of AI and robotics. 1982’s Blade Runner was notable as much for its rumination on what, if anything, separates artificially created intelligence from the human variety as it was for being one of the progenitors of cyberpunk. The Terminator films of the 1980s and early 1990s preferred the more direct approach to the question of man versus machine, while The Matrix and its sequels looked for answers in cod philosophy and CGI martial arts. In keeping with AI advances since then, 2013’s Her offered a different and surprisingly believable take: Given the increasing sophistication of chatbots and digital assistants like Siri and Alexa, the idea of a human-AI romance a decade or so down the line isn’t particularly startling.

The trouble is that the singularity they depict—and Musk fears—cannot, by definition, be predicted. That leads to a conundrum: What can be done to contain a threat that AI sceptics believe is real but that is too vague to be clearly defined? One option is to find market solutions, putting up money to fund research in ethical and safe AI, as Musk has done with OpenAI. The other is more dangerous. At a gathering of US governors earlier this month, Musk pressed them to “be proactive about regulation”. What precisely does that entail? Pure research and its practical applications interact constantly to push the field of AI and robotics forward. Government control and red tape to stave off a vague, imprecise threat would be an innovation-killer.

But there are more mundane, less apocalyptic AI threats that can be predicted. For one, AI as it exists today—used by Facebook or underlying Google’s search engine—lives and dies by data. That makes the questions of data privacy and consent being raised around the world today, including in India, vital. Then there is the problem of AI that exceeds its parameters. In a world where self-driving cars and autonomous weapons and weapons platforms—the US Air Force is testing an AI flight combat system while China is working on cruise missiles that incorporate AI—are near-future realities, lapses could be catastrophic. The same goes for AI routines used for governance purposes. Mistakes in facial recognition, or discrimination between welfare recipients due to profiling based on skin colour or caste, could ruin lives.

If Musk and Zuckerberg’s dust-up serves to raise the profile of these issues, it will have done some good. An ongoing public debate on the future of AI—before we are stuck outside the pod bay doors pleading to be let in—is important.

Is Elon Musk’s warning about the threat AI poses to humanity realistic? Tell us at views@livemint.com
