Google’s DeepMind finds way to overcome AI’s forgetfulness problem

The breakthrough for DeepMind may open the way for artificial intelligence systems to be more easily applied to multiple tasks, instead of being narrowly trained for one purpose

The researchers created an algorithm that computes how important each connection in a neural network is to the task it has just learned and then assigns this connection a mathematical weight proportional to its importance. Photo: iStockphoto

London: DeepMind, the London-based artificial intelligence (AI) company owned by Alphabet Inc., claims it overcame a key limitation affecting one of the most promising machine learning technologies: the software’s inability to remember.

The breakthrough, described in a paper published Tuesday in the academic journal Proceedings of the National Academy of Sciences, may open the way for artificial intelligence systems to be more easily applied to multiple tasks, instead of being narrowly trained for one purpose. It should also improve the ability of AI systems to transfer knowledge between tasks and to master a sequence of linked steps.

Neural networks, software loosely modelled on the structure of synapses in the human brain, are considered the best machine learning technique for language translation, image classification and image generation. But these networks suffer from a major flaw that scientists call “catastrophic forgetting”: they exist in a kind of perpetual present, and every time a network is given new data it overwrites what it has previously learned.
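To make the failure concrete, the following is a minimal sketch (in Python with NumPy, not DeepMind’s code) of catastrophic forgetting: a toy linear model is first fitted to one task by gradient descent, then trained only on a second task, and its error on the first task climbs because the new gradients simply overwrite the old solution.

```python
# Toy illustration of catastrophic forgetting (hypothetical example, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w, n=200):
    """Generate a small regression task with the given true parameters."""
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.01 * rng.normal(size=n)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def train(w, X, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

Xa, ya = make_task(np.array([1.0, -1.0]))   # task A
Xb, yb = make_task(np.array([-2.0, 0.5]))   # task B, with different "correct" parameters

w = train(np.zeros(2), Xa, ya)
print("after task A, loss on A:", round(mse(w, Xa, ya), 4))   # low

w = train(w, Xb, yb)                        # keep training, now only on task B
print("after task B, loss on A:", round(mse(w, Xa, ya), 4))   # high: task A is forgotten
```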

In human brains, neuroscientists believe that one way in which memory works is that connections between neurons that seem important for a particular skill become less likely to be rewired. The DeepMind researchers drew on this theory, known as synaptic consolidation, to create a way to allow neural networks to remember. They worked with Claudia Clopath, a neuroscientist at London’s Imperial College, who is a co-author on the paper.

“We have shown it is possible to train a neural network sequentially, which was previously thought to be a fundamental limitation," James Kirkpatrick, a DeepMind researcher who was the lead author on the paper, said in an interview.

The researchers created an algorithm, called Elastic Weight Consolidation (EWC), that computes how important each connection in a neural network is to the task it has just learned and then assigns that connection a mathematical weight proportional to its importance. The weight slows down the rate at which the value of that particular connection can be altered, so the network is able to retain existing knowledge while learning a new task.
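In the paper this idea takes the form of a quadratic penalty: the loss on the new task is augmented with terms roughly of the form (λ/2)·F_i·(θ_i − θ*_i)², where F_i measures how important parameter i was to the old task and θ*_i is its value after learning that task. The sketch below is a loose illustration of that mechanism rather than DeepMind’s implementation; it uses the mean squared per-example gradient as a crude stand-in for the Fisher-information importance used in the paper, and shows the penalty holding important parameters near their old values while a toy model trains on a second task.

```python
# Loose EWC-style sketch on a toy pair of linear-regression tasks (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def make_task(true_w, n=200):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.01 * rng.normal(size=n)

def grad_mse(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

Xa, ya = make_task(np.array([1.0, -1.0]))   # old task A
Xb, yb = make_task(np.array([-2.0, 0.5]))   # new task B

# Learn task A with plain gradient descent.
w = np.zeros(2)
for _ in range(500):
    w -= 0.05 * grad_mse(w, Xa, ya)
w_a = w.copy()                              # remember the task-A parameters

# Per-parameter importance: mean squared per-example gradient on task A
# (a crude stand-in for the diagonal Fisher information used in the paper).
importance = np.mean((2 * Xa * (Xa @ w_a - ya)[:, None]) ** 2, axis=0)

# Learn task B, adding the EWC-style penalty  lambda/2 * sum_i F_i * (w_i - w_a_i)^2,
# whose gradient pulls important parameters back toward their task-A values.
lam = 50.0
for _ in range(500):
    w -= 0.05 * (grad_mse(w, Xb, yb) + lam * importance * (w - w_a))

print("loss on old task A:", round(mse(w, Xa, ya), 4))
print("loss on new task B:", round(mse(w, Xb, yb), 4))
```

Because the two toy tasks here are deliberately in conflict, the penalty simply trades some performance on the new task for memory of the old one; in the paper the same constraint lets a single network find parameters that serve many Atari games at once.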

The researchers tested the algorithm on ten classic Atari games, which the neural network had to learn to play from scratch. DeepMind had previously created an AI agent able to play these games as well as or better than any human player, but that earlier AI could learn only one game at a time: if it was later shown a game it had learned before, it had to start over from the beginning.

The new EWC-enabled software was able to learn all ten games and, on average, come close to human-level performance on all of them. It did not, however, perform as well as a neural network trained specifically for just one game, the researchers wrote.

Peter Dayan, a professor of neuroscience at University College London, said DeepMind’s paper might help neuroscientists think about how synaptic consolidation works. He said studies by psychologist John Pearce, at Cardiff University, seemed to indicate that a very similar mechanism determined which synapses were available or unavailable for new learning.

DeepMind, which Alphabet purchased for £400 million ($486.4 million) in 2014, is best known for having created AI software able to beat the world’s best players at the ancient Asian strategy game Go. That achievement was considered a major milestone in computer science because Go has so many possible moves that a computer cannot simply figure out the best move in every situation and instead must rely on something more akin to intuition—making educated guesses based on its own experience. Bloomberg

Published: 15 Mar 2017, 06:24 PM IST