Opinion | The world must act against the threat of autonomous weapons
Updated: 06 Feb 2020, 10:26 PM IST
We need a strategic framework that assigns responsibility for the devastation that armaments loaded with AI could wreak
Whether we like it or not, we are fast approaching a pivotal moment in the global arms race for Lethal Autonomous Weapons (LAWs). One can barely go a day without hearing how new technologies such as artificial intelligence (AI) and machine learning will disrupt the way things are done. Yet, the conversation often overlooks the most palpable threat: major countries and technology companies all over the world are developing LAWs capable of acquiring, identifying and engaging targets without any meaningful human control.
While these may sound like distant threats on an apocalyptic horizon, LAWs are already part of the arsenals of several countries. Israel Aerospace Industries’ Harpy, for instance, hovers high in the sky surveying the land, and when an enemy radar signal is detected, it crashes into the source of that signal, destroying the target and itself. India and several other countries have already purchased and deployed this weapon. Demands have risen for a comprehensive international treaty to pre-emptively ban the development of AI and other technologies in the field of LAWs, but no country wants to be left behind. The race has been fanned by major technology companies, such as Amazon, Microsoft and Google, which have gone head-to-head for lucrative defence contracts. The proliferation of such weapons could have widespread ramifications for the way warfare is conducted and for the future of our society.
As long as mankind has fought, decisions of life and death on the battlefield have, for better or worse, been left in the hands of fellow humans. With LAWs, we jump off a moral precipice: autonomous systems decide who lives and who dies. These weapons subject war to ostensibly objective standards, whereas the causes of warfare are inherently subjective: nationalist sentiment, political strategy and human disagreements. Algorithms used in LAWs internalize prejudices but do not account for human suffering, and could therefore cause extensive violence. This leads to the next visible challenge of this technology: fixing accountability. Who should be held responsible for the unintended actions of LAWs? Should it be the developer, who was likely unaware of the context in which the weapon might be deployed? The manufacturer, which created a product and then washed its hands of how the purchaser might use it? The state or military that put the weapon to use? Could it be the machine itself? Human accountability for the use of force must remain at the heart of any solution.
All these issues do not even take into account what might happen if this technology falls into the wrong hands. Incidents all over the world have shown that even the most advanced security systems are susceptible to hacking. Terrorists or rogue states could turn such weapons on civilians. In the future, possessors of private information, whether terror groups or states, could target specific individuals and carry out surgical strikes through named bullets, aided by real-time tracking and facial recognition technology. This is not as dystopian as it might sound. In time, the economic feasibility of such weapons will lower the cost barriers to war. Conflict will be reduced to a game in which civilian and personnel casualties are mere statistics on a screen.
Globally, various interest groups are worried about the possible degradation of international humanitarian law and of existing legal and ethical principles. The Campaign to Stop Killer Robots, a coalition of non-governmental organizations, has called for a pre-emptive ban on fully autonomous weapons, and counts among its ranks 30 countries, 21 Nobel laureates, and thousands of AI scientists. Prominent technocrats such as Elon Musk and Larry Page have signed open letters warning of the threats that such technology could pose. Yet, these calls have fallen on deaf ears. To date, there are no international standards or legal instruments regulating the use of LAWs. Neither the Laws and Customs of War on Land nor the Geneva Conventions define which AI systems can be used in combat and which cannot. While the US, China, Russia and Israel have stated that developing fully autonomous weapons is not their goal, the reality is that such systems are likely to be created.
India’s non-participation in the proposed treaty, and its ready investment in LAWs, stems from the concern that enemy states could develop or acquire them. While India is not bound by any international obligations, the broad definitions of “arms”, “fire arms” and “prohibited arms” under the Arms Act, 1959, would bring LAWs within our existing legal framework and within the ambit of the Industries (Development & Regulation) Act, 1951. This means that the development of autonomous weapons would require elaborate processes, for which manufacturers would need licences and security clearances.
The calls so far have focused on a pre-emptive ban. The technology industry in India and globally must come together to regulate itself and stop the development of autonomous weapons. Furthermore, governments must develop a strategic framework for AI that fixes accountability for the actions of these systems, and creates checks and balances to mitigate the likelihood of any catastrophic outcome. All this will require new forms of rights, new modes of governance, and new systems of liability. The consensus must be to retain meaningful human control over decision-making in warfare. If we fail to do so, the defining treatise on warfare in the future won’t be Sun Tzu’s Art of War or Chanakya’s Arthashastra. Instead, it might just be a few lines of code.
Nishith Desai & Aryadita Balakrishnan are, respectively, founder, Nishith Desai Associates, and core member, Nishith Desai Associates’ Strategic Legal Consulting Practice