Photo: AFP

Will artificial intelligence increase the risk of a nuclear war?

The hazards of artificial intelligence for nuclear security lie not in AI-controlled doomsday machines but in its potential to encourage humans to take apocalyptic risks, says a paper by the RAND Center for Global Risk and Security

Mumbai: Artificial intelligence (AI) has the potential to upend the foundations of nuclear deterrence by the year 2040, according to a new paper by the RAND Center for Global Risk and Security, part of the RAND Corp, a policy research organisation.

While AI-controlled doomsday machines are considered unlikely, the paper says, the hazards of artificial intelligence for nuclear security lie instead in its potential to encourage humans to take apocalyptic risks.

The RAND publication says that in the coming decades, AI has the potential to erode the condition of mutually assured destruction and undermine strategic stability. Improved sensor technologies could introduce the possibility that retaliatory forces such as submarines and mobile missiles could be targeted and destroyed. Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, the authors of the report say.

“The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history," said Edward Geist, co-author on the paper and associate policy researcher at the RAND Corp. He said one example of such work was the Survivable Adaptive Planning Experiment in the 1980s, which sought to use AI to translate reconnaissance data into nuclear targeting plans.

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes," said Andrew Lohn, co-author on the paper and associate engineer at RAND, in a press release on 24 April. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk."

For instance, it was only on 4 April that over 3,100 Google employees signed a petition opposing the company’s part in a Pentagon AI program. “The letter asked CEO Sundar Pichai to pull Google out of the project, which harnesses artificial intelligence to analyze video and could improve drone targeting," according to media reports.

The US Department of Defence’s Algorithmic Warfare Cross-Functional Team (AWCFT), or Project Maven, focuses on computer vision to autonomously extract objects from moving or still imagery. It also uses artificial neural networks to detect patterns from mountains of data.

Fortunately for us, whether we love AI or fear it, most of the AI we see around us caters to specific tasks, which is why it is categorised as “weak AI". Examples include most AI chatbots, AI personal assistants, smart speakers and smart home assistants such as Apple’s Siri, Microsoft’s Cortana, Google’s Allo, Amazon’s Alexa or Echo and Google’s Home. Driverless cars and trucks, however impressive they sound, remain higher manifestations of “weak AI".

In other words, “weak AI" lacks human consciousness. Moreover, though we talk about the use of artificial neural networks (ANNs) in deep learning—a subset of machine learning—ANNs do not behave like the human brain; they are only loosely modelled on it.
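To see how loose that modelling is, here is a minimal illustrative sketch (not from the RAND paper; the weights and numbers are invented for illustration) of a single artificial "neuron"—just a weighted sum followed by a threshold:

```python
def neuron(inputs, weights, bias):
    # An artificial "neuron" is only a mathematical abstraction:
    # a weighted sum of its inputs plus a bias, passed through a
    # simple threshold "activation" -- nothing like a biological cell.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights make this neuron act like a logical AND gate.
print(neuron([1, 1], [0.5, 0.5], -0.6))  # both inputs on -> 1
print(neuron([1, 0], [0.5, 0.5], -0.6))  # one input on   -> 0
```

Deep learning stacks millions of such units and tunes the weights automatically from data, but each unit remains this kind of crude arithmetic abstraction rather than a simulated brain cell.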

India’s own report on AI, released this March by a task force constituted by the union ministry of commerce, identifies 10 specific domains for rapid AI incorporation: manufacturing, fin-tech, health, agriculture, technology for the differently-abled, national security, environment, public utility services, retail and education.

However, researchers caution that “weak AI" can become “strong AI", and machines with “strong AI" could have a brain as powerful as the human brain. Such machines will be able to teach themselves, learn from others, perceive, emote—in other words, do everything that human beings do—and more. It’s the “more" aspect that we fear most.

Strong AI—also called true intelligence or artificial general intelligence (AGI)—may still be a long way off, but cautionary notes are in order to keep the misuse of AI in check.

According to the RAND Corporation, there are two likely scenarios. In one, an increased reliance on AI could lead to its use before it is technologically mature. In the other, if the nuclear powers manage to establish a form of strategic stability compatible with the emerging capabilities that AI might provide, the machines could reduce distrust and alleviate international tensions, thereby decreasing the risk of nuclear war, the authors say.

They conclude: “...at present, we cannot predict which—if any—of these scenarios will come to pass, but we need to begin considering the potential impact of AI on nuclear security before these challenges become acute".
