Autonomous weapon systems may involve not only killer military drones but also microscopic robots that can crawl into human blood vessels. Photo: Getty Images

Opinion | Before AI achieves singularity, it will be a horror

Automated weaponry is not new, but there has always been a human to pull the trigger. That will no longer be the case when AI-powered autonomous weapon systems take over military warfare

The redefinition of jobs as we know them is what troubles us most about recent advances in Artificial Intelligence (AI). Fully autonomous cars are expected, by some accounts, to arrive by 2020. AI-written novels may be here by 2030, and AI surgery by 2050, depending on whom you believe. We fear “singularity", the point at which AI crosses over and becomes independently capable of re-engineering and reinventing itself without human assistance—and is smarter than any human trying to control it.

Neither jobs nor singularity, however, should be what worries us most about the advances in AI. To my mind, the AI-based development of autonomous weaponry, which to some extent is already happening, is far more frightening. Autonomous weapons can decide for themselves when to take human lives. According to The Independent newspaper, experts and senior military officials have said that the use of “killer robots" will be widespread in warfare within a matter of years, with global spending on robotics set to touch $188 billion in 2020.

That paper was reporting on a meeting of the United Nations Convention on Certain Conventional Weapons (CCW) that took place in early September in Geneva. According to its report, a majority of countries had proposed beginning negotiations on a new treaty to prevent the development and use of fully autonomous conventional weapons that can act without human oversight.

However, a group of advanced military powers, including Australia, Israel, Russia, South Korea and the US, prevented talks during the CCW meeting that could have led to an international treaty banning fully autonomous weapon systems. Nation states have yet to agree on even a shared definition of what a lethal autonomous weapon system is; according to campaigners who want to stop such research, the countries keenest to develop these weapons are using this lack of agreement as a pretext to stall further discussions.

Automated weaponry is not new. In 1884, Hiram Maxim invented the first machine gun, and in the last century, man’s foray into the skies was soon followed by sky-based warfare. After a while, sky-based warfare became more automated, from intercontinental ballistic missiles that are launched and controlled from afar and can cause widespread destruction, to the pinpoint accuracy of killer military drones that are piloted from thousands of miles away and can be aimed to kill a single man. But these weapons are not autonomous, since they all still have a human being at the trigger end. It is that human who must take the final decision to snuff out one—or many—lives.

Despite patriotism and the drilled-in willingness to follow orders, human beings who decide when to pull a trigger or drop a bomb can pay a high psychological price. Remorse and post-traumatic stress disorder (PTSD) often dog those who have been in combat situations. Charles Sweeney, who dropped the second atom bomb on Nagasaki, said before he died: “As the man who commanded the last atomic mission, I pray that I retain that singular distinction." A machine that makes such a decision obviously pays no such price, nor, by extension, do the scientists and researchers who build it. I doubt Hiram Maxim faced PTSD, though many who actually pulled the trigger on his machine guns most certainly did.

China has now begun to recruit and train youngsters for its autonomous weapons programme. On 8 November, the South China Morning Post reported that Beijing Institute of Technology (BIT) had announced on its website that it had selected 31 students, all under the age of 18, to begin training as the world’s youngest AI weapons scientists. More than 5,000 high school students had applied for the programme. The 31 were not selected solely for their intellectual aptitude; they were also screened for other qualities such as creative thinking, a willingness to fight, persistence when facing challenges, “passion" for developing new weapons, and patriotism, according to a BIT professor. Other countries will follow suit.

Nor need such weapons be conventional: they could also be microscopic robots that crawl into human blood vessels. When used in weaponry, AI will be a horror long before it achieves singularity.

Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India.
