Should we apply the brakes on artificial intelligence research?

Should AI research be on the same pedestal as research into the cloning of humans, with which, by the by, it shares many ethical characteristics?

New threats come about as new areas, which are unbounded by human capabilities, are conquered by artificial intelligence. Photo: iStockphoto

Some months ago, I wrote in this column that a team of computer scientists at three different universities had published a paper titled Stealing Machine Learning Models via Prediction APIs. APIs, or application programming interfaces, are built into a computer application to allow programmers and other computer applications to access it. What these researchers found was a way to create their own artificially intelligent interface into a Black Box and then use the output from the box to reconstruct its internal workings, thereby reverse-engineering the box.
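
The mechanics are easy to sketch: an extraction attack queries the black box many times and fits a local “surrogate" model to the answers it gets back. What follows is a minimal, hypothetical illustration in Python with scikit-learn, not the researchers’ actual code; query_victim is a stand-in I have invented for the remote pay-per-query prediction API.

```python
# Illustrative sketch of a model-extraction attack: probe a
# black-box prediction API, then fit a local "surrogate" model
# to the stolen input/output behaviour. query_victim() is a
# hypothetical stand-in for the remote pay-per-query service.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_victim(X):
    # Placeholder: a real attack would call the remote API here.
    # We simulate a victim that labels points by a hidden rule.
    return (X[:, 0] + 2 * X[:, 1] > 1).astype(int)

# 1. Probe the black box with inputs of the attacker's choosing.
rng = np.random.default_rng(0)
X_probe = rng.uniform(-2, 2, size=(1000, 2))
y_probe = query_victim(X_probe)

# 2. Fit a surrogate to the observed input/output pairs.
surrogate = LogisticRegression().fit(X_probe, y_probe)

# 3. The surrogate now mimics the victim; its coefficients
# approximate the hidden decision rule (here, x0 + 2*x1 > 1).
print(surrogate.coef_, surrogate.intercept_)
```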

Apart from being able to reconstruct pay-per-query machine-learning engines, they can in some cases also recreate the private data that an AI (artificial intelligence) engine has been trained with. And in an even more sinister revelation, other researchers have shown that they can even learn how to trick the original Black Box. According to Google researcher Alexey Kurakin, one can slightly alter images fed to image-recognition Black Boxes so that the machine learning/neural networks see something that isn’t there. Evidently, just by altering a few pixels in an image, changes that are imperceptible to an ordinarily intelligent human eye, an AI program can be fooled into thinking that an elephant is actually a car!
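
A well-known recipe for building such a perturbation, though not necessarily the exact one behind Kurakin’s demonstration, is the fast gradient sign method: nudge every pixel a tiny step in the direction that most increases the classifier’s error. The sketch below, in Python with NumPy, is purely illustrative; loss_gradient is a hypothetical placeholder for a framework call that returns the gradient of the model’s loss with respect to the input image.

```python
# Illustrative sketch of the fast gradient sign method (FGSM)
# for crafting an adversarial image. loss_gradient() is a
# hypothetical placeholder for a real framework call (e.g. in
# PyTorch or JAX) that backpropagates d(loss)/d(pixels).
import numpy as np

def loss_gradient(image, true_label):
    # Placeholder gradient; a real attack would compute this by
    # backpropagating through the victim network.
    return np.random.default_rng(0).standard_normal(image.shape)

def fgsm(image, true_label, epsilon=0.01):
    # Step every pixel by +/- epsilon in the direction that
    # increases the classifier's loss, then clip to valid range.
    grad = loss_gradient(image, true_label)
    adversarial = image + epsilon * np.sign(grad)
    return np.clip(adversarial, 0.0, 1.0)

# With pixels in [0, 1], a per-pixel change of 0.01 is invisible
# to the eye, yet often enough to flip the model's prediction.
image = np.random.default_rng(1).uniform(size=(224, 224, 3))
adversarial = fgsm(image, true_label=0)
print(np.abs(adversarial - image).max())  # bounded by epsilon
```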

On 21 February 2018, a news report by the British Broadcasting Corp. revealed that leading AI experts have said that the threat of AI being exploited is real, and that it needs to be countered immediately by effective action, both by the technologists who are building these new systems and by governments, which need to consider new laws. The news bulletin cited a study called The Malicious Use of Artificial Intelligence, carried out by 26 reputable researchers from 14 different institutions, and quoted from its report, which is available online at maliciousaireport.com. Shahar Avin of Cambridge University’s Centre for the Study of Existential Risk, one of the lead authors of the report, says, “This time it’s different, you can no longer close your eyes.”

The report makes the case that AI is a dual-use technology which can be put to both beneficial and harmful ends, depending on the original intent of the programmer or the hacker. It also says that AI is commonly both efficient and scalable, meaning any single type of use can explode very easily. It then says that at some time in the next 50 years, AI is likely to exceed human capabilities. From these, it draws the conclusion that what malicious AI can do can be bunched broadly into three groupings: one, that it can expand existing threats; two, that it can introduce new threats; and three, that it can alter the typical character of threats.

Under the heading of expanding existing threats, it counts “spear” phishing security attacks, where the attacker can pose as one of the target’s friends, thereby causing the target to willingly part with personal, sensitive and financially valuable information. Phishing attacks today are rudimentary, and rely on both the gullibility and the greed of the target, as in the various, and now famous, West African scams which promise to send impossibly large sums of money into your bank account. “Spear” phishing, by contrast, is more accurate, since it sidesteps the need for gullibility by posing as a trusted person. Spear phishing attempts today rely on a significant amount of skilled labour, since each target needs to be researched in detail, including getting to know their net worth, work networks, family circles and so on. Much of this skilled labour can now be performed by AI.

The report then moves on to the area of psychological distance and increasing anonymity, claiming that these aspects of AI can induce certain actors to take part in attacks more willingly. I am unsure of the intracranial workings of this realm, yet I think the report’s stance is plausible.

New threats come about as new areas, which are unbounded by human capabilities, are conquered by AI. Take, for instance, Kurakin’s finding that a few pixels can be changed in order to maliciously trick an image-recognition system, causing a self-driving car’s systems to conclude, as no human would, that a stop sign is something else altogether, thereby causing a disaster. Since AI is scalable and efficient, this can be replicated across several servers, making an entire fleet of such vehicles have similar accidents concurrently.

The report goes back to the unknown changes that may be caused by psychological distance, as well as AI’s properties of efficiency and scalability, to address how the attacks themselves may change significantly in the near future and alter the typical character of today’s threats. It goes on to list several threats to digital security, physical security (repurposing drones into image recognition-enabled attack vehicles, for example) and political security, where it dwells on both state-sponsored and non-state-sponsored actions that may cause political instability.

To be fair, the report also provides several recommendations on how we might avoid this Armageddon, grouped into four areas:

■ Policymakers and technical researchers to work together to understand and prepare for the malicious use of AI

■ A realization that, while AI has many positive applications, it is a dual-use technology, and AI researchers and engineers should be mindful of and proactive about the potential for its misuse

■ Best practices that can and should be learnt from disciplines with a longer history of handling dual-use risks, such as computer security

■ An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI

Given the scale of the threats being discussed, and the massive reputation of the experts who conducted the study, I must confess that I am momentarily at a loss for words on how to explain it all, especially to the next generation. Is this what we will be leaving for them?

Should AI research be on the same pedestal as research into the cloning of humans, with which, by the by, it shares many ethical characteristics?

Siddharth Pai is a world-renowned technology consultant who has personally led over $20 billion in complex first-of-a-kind outsourcing transactions.

Published: 27 Feb 2018, 12:21 AM IST