
Will humans be part of the wars of the future?

It is impossible to keep humans in the loop in a war fought with autonomous weapons systems. Relying on Artificial Intelligence to take out these weapons isn't any less unsettling

A US predator drone. Future wars will take place at the speed at which microprocessors communicate—far beyond the capacity of humans to process or intervene. Photo: Getty Images

We have long believed that the easiest way to ensure that autonomous weapons are safe is to put a human in the loop. This is why we have designed our autonomous systems so that no matter how sophisticated the technology might be at identifying the target and delivering the payload to its vicinity, the final judgement call as to whether the strike should take place is left to a human being. This construct, we feel, addresses many of the ethical and legal issues with robotic weapons and allows us to proceed with using them.

There are many shades of grey within this broad construct. Some autonomous weapons are designed so that a human identifies the target and then leaves it to the weapon to strike it—even if it has subsequently moved away from its original position. Others loiter in the area where the target was spotted—sometimes for days on end—until they have positively identified the target to their human controllers, who then authorize the strike.
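To make the distinction concrete, here is a toy sketch in Python of these two engagement modes. Every name, type and behaviour in it is an illustrative assumption, not a description of any real weapon system.

```python
# Toy sketch of the two engagement modes described above. Every name,
# number and behaviour here is an illustrative assumption, not a
# description of any real weapon system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Target:
    track_id: str
    positively_identified: bool

def fire_and_forget(target: Target, human_designated: bool) -> str:
    # Mode 1: a human designates the target once; the weapon then
    # pursues it on its own, even if the target later moves.
    if not human_designated:
        return "no engagement"
    return f"weapon autonomously pursues {target.track_id}"

def loiter_and_authorize(target: Target,
                         human_authorizes: Callable[[Target], bool]) -> str:
    # Mode 2: the weapon loiters until the target is positively
    # identified, then defers the final strike decision to a human.
    if not target.positively_identified:
        return "still loitering"
    if human_authorizes(target):  # the human takes the final call
        return f"strike on {target.track_id} authorized"
    return "engagement aborted by human controller"

if __name__ == "__main__":
    t = Target("track-42", positively_identified=True)
    print(fire_and_forget(t, human_designated=True))
    print(loiter_and_authorize(t, human_authorizes=lambda tgt: False))
```

In both modes the lethal act traces back to a human decision; the difference is only how far upstream of the strike that decision sits.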

In every instance, it is the human, not the weapon itself, that takes the final decision. This has allowed us to rationalize—at least to ourselves—the use of intelligent war machines in the field of battle.

During the Gulf Wars, Patriot PAC-2 missiles were the US Army's primary line of defence against incoming ballistic missiles. This all-in-one autonomous weapons system could scan the skies for radar signals that indicated incoming Iraqi missiles and tag those signals for the attention of a human, who took the final decision to fire or not. The system worked so well that Patriot missile batteries are rumoured to have engaged over 40 Iraqi Scud missiles during that war, effectively neutralizing their threat.

On 22 March 2003, a Patriot missile battery identified a radar signal as an incoming anti-radiation missile designed to take out US radar installations on the ground. The lieutenant in charge had seconds to decide. With no information other than the recommendation the semi-autonomous weapons system had provided, she gave the order to fire, and the Patriot missile took out the threat.

It was only a day later that it was discovered that the Patriot system had misidentified a friendly coalition aircraft coming home to land as an incoming enemy missile. The human who was supposed to exercise judgement to ensure that such mistakes do not happen had neither enough information nor enough time to recognize that the weapon had made a mistake.

This is an example of weapon fratricide—the term used to describe a weapon turning on its own side. It is one of the many challenges of deploying these increasingly advanced weapon systems in the field of battle, even with a human in the loop to take the final call. It is also why the calls for an absolute ban on autonomous weapons have, of late, grown more shrill.

Incidents like this have shaken our belief in the efficacy of having a human in the loop. We are beginning to realize that human oversight is not nearly enough. The speed at which modern wartime decisions need to be taken makes the human an inconvenient impediment. Under pressure to take quick calls, human operators' judgement is often impaired, so much so that they tend to blindly follow the suggestions of the Artificial Intelligence system, defeating the very purpose for which they were put in that chair.

That said, as long as wars are fought using weapons of destruction—explosive projectiles targeted at combatants or military installations—there is still a chance that a human being overseeing the conflict will be able to avert a mistake. It takes time for a missile to reach its target, and it is possible for a human to intervene and countermand the decision, averting disaster before it happens.

However, this is not the arena in which all the battles of the future will be fought. We are fast moving to a world in which a significant part of the war between nations will be cybernetic. This battle will take place at the speed at which microprocessors communicate—far beyond the capacity of humans to process or intervene.
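A rough back-of-the-envelope calculation illustrates the gap. Assuming, purely for illustration, a human reaction time of about 250 milliseconds and one automated decision per microsecond, a machine can act a quarter of a million times in the window it takes a human merely to react:

```python
# Back-of-the-envelope comparison of human and machine decision speeds.
# Both figures are rough illustrative assumptions, not measured values.
HUMAN_REACTION_S = 0.25     # ~250 ms: a typical human reaction time
MACHINE_DECISION_S = 1e-6   # ~1 microsecond per automated decision

ratio = HUMAN_REACTION_S / MACHINE_DECISION_S
print(f"Machine decisions per single human reaction: {ratio:,.0f}")
# Prints: Machine decisions per single human reaction: 250,000
```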

We have already seen malware like Stuxnet insinuate itself into power plants, water installations, traffic lights and factories autonomously, without direction from any central command, taking control of the programmable logic controllers that operate these machines. At the same time, we have seen how high-frequency trading bots operate on stock exchanges, trading on the market at speeds impossible for human traders to match.

If you couple the speed of a trading algorithm with the autonomous design of the Stuxnet virus, you begin to get a sense of what the wars of the future will look like. These weapons of war will scale rapidly, penetrate our infrastructure and eviscerate us from within. Our only defence against such an onslaught will be to deploy defensive Artificial Intelligence systems that can identify the attack and autonomously counter-attack to protect the system.
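To give a flavour of what such a defensive system might look like in outline, here is a minimal sketch of an autonomous defence loop: it watches a traffic metric, flags statistically anomalous readings, and triggers a containment response without waiting for a human. The metric, threshold and response below are all hypothetical placeholders, a sketch of the idea rather than any real implementation.

```python
# Minimal sketch of an autonomous defensive loop: watch a traffic
# metric, flag statistically anomalous readings, and trigger an
# automated containment response with no human in the loop.
# Metric, threshold and response are hypothetical placeholders.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, k: float = 3.0) -> bool:
    # Flag readings more than k standard deviations above the baseline mean.
    if len(history) < 2:
        return False
    return current > mean(history) + k * stdev(history)

def defend(history: list[float], current: float) -> str:
    if is_anomalous(history, current):
        # A real system might isolate a network segment or rate-limit a
        # controller; here the response is just a placeholder string.
        return "automated response: isolate affected segment"
    return "continue monitoring"

if __name__ == "__main__":
    baseline = [100.0, 102.0, 98.0, 101.0, 99.0]  # e.g. packets per ms
    print(defend(baseline, 103.0))   # continue monitoring
    print(defend(baseline, 500.0))   # automated response: isolate ...
```

Note that the entire detect-and-respond cycle happens at machine speed; the human learns about the engagement, if at all, only after it is over.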

It is impossible to have humans in the loop in this sort of war. Humans simply cannot operate at machine speed, and keeping them in the loop becomes meaningless. But I am not sure that the alternative—relying on defensive Artificial Intelligence to take out these new computer-powered weapons—is any less unsettling.

Rahul Matthan is a partner at Trilegal. Ex Machina is a column on technology, law and everything in between. His Twitter handle is @matthan.

Comments are welcome at views@livemint.com

Published: 23 May 2018, 07:24 AM IST