A couple of weeks ago, I watched a video that had been doing the rounds on WhatsApp. It featured a miniature drone imbued with facial recognition and artificial intelligence technologies that could accurately identify a human target and fly close enough that, when it detonated, its precisely shaped plastic explosive payload took out the target with no collateral damage. Used in concert, swarms of these autonomous drones could coordinate attacks, breach buildings and reach places where human soldiers couldn't hope to enter. This is the new face of warfare.

All of this is scary because the small size and autonomous intelligence of these weapons make it hard to develop effective countermeasures against an attack. It's easy to see how this might appeal to armies already under pressure to reduce combat fatalities and remove humans from direct action. Towards the end of the video, you realize it is a fictionalized representation, but it seems so plausible because every one of the technologies it depicts already exists.

For over a decade now, militaries of the world have been moving towards weapons designed to keep humans further and further away from the action. This is why we've seen an increased involvement of drones in battle. We've slowly grown accustomed to the reality of this new form of warfare, somewhat reassured (as far as one can be under the circumstances) that there will always be humans-in-the-loop to take the actual "kill" decision. So far, there seems to be broad consensus that appropriate human judgment must be applied to the determination of whether the target is, in fact, a combatant or a civilian, and that the morality of a life-and-death decision should not be left to dispassionate machines.

But there are risks with having drones remotely piloted by humans. For one, the communications link between the drone and its pilot could be jammed or hacked, either neutralizing it or, even worse, allowing the enemy to take control of it. There is also a concern that where split-second decisions need to be taken in battle conditions, drone pilots who are far away from the action will not be able to react in the available time. Severing the link with the pilot and having the intelligence on-board allows for quicker and more decisive action, making the demand for these technologies hard to resist. 

We are fast reaching the point where drone technology will have fully autonomous capabilities. There seems to be little to stop the militaries of the world from using facial recognition and artificial intelligence technologies that have already proved that they work in social media applications. Very soon, it will be possible for us to deploy autonomous weapons that do not require humans-in-the-loop. 

Once we cross that moral line, we will dramatically lower our reluctance to engage in armed conflict. By delegating the act of taking a life to an autonomous weapon, it will become that much easier to order surgical strikes, as we will face none of the moral anxiety that comes with pulling the trigger. It will also allow us far greater flexibility in battle tactics: the small form factor and localized destructive capability of these weapons will let us engage differently, with minimal collateral damage.

Every country that looks to deploy this technology will be convinced that it has devised the appropriate moral frameworks to only use this technology in an ethical way. Even if that is true—and I have no confidence, based on present form, that any country can truly provide such an assurance—how do we account for the hundreds of dictators and failed nation states into whose hands this technology will inevitably fall, and who will have no moral qualms about using it? 

If this sounds familiar, it is because we've been here before. In the past, the nations of the world came together to put in place a ban on biological weapons. As a result, there have been next to no instances of biological warfare. We need to come to a similar agreement on fully autonomous weapon technologies, declaring them illegal and contrary to the international order. We need to do so urgently, because all it takes is for one nation to develop this technology and every other nation will have no option but to build up defensive stockpiles of its own. One rogue nation is all it would take to set off an arms race we can ill afford.

It may be inevitable that some day in the future, wars will be fought between robots, Terminator style. But in the interregnum, and for so long as these technologies are used on human targets, we’d be far better off banning the entire category of weapon systems.

Rahul Matthan is a partner at Trilegal. Ex Machina is a column on technology, law and everything in between.

His Twitter handle is @matthan.
