AI scientists are producing new theories of how the brain learns

  • The challenge for neuroscientists is how to test them

The Economist
Published 15 Oct 2024, 03:10 PM IST
Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish. (Image: Pixabay)

Five decades of research into artificial neural networks have earned Geoffrey Hinton the moniker of the godfather of AI; his work laid the groundwork for today’s headline-grabbing models, including ChatGPT and LaMDA. These can write coherent (if uninspiring) prose, diagnose illnesses from medical scans and navigate self-driving cars. But for Dr Hinton, creating better models was never the end goal. His hope was that by developing artificial neural networks that could learn to solve complex problems, light might be shed on how the brain’s neural networks do the same.

Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others are weakened. But because the brain has billions of neurons, of which millions could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much. Dr Hinton popularised a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks. But it was long thought to be too unwieldy to have evolved in the human brain. Now, as AI models are beginning to look increasingly human-like in their abilities, scientists are questioning whether the brain might do something similar after all.

Working out how the brain does what it does is no easy feat. Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish. It’s often not clear whether living, learning brains work by scaled-up versions of these same rules, or if something more sophisticated is taking place. Even with modern experimental techniques, wherein neuroscientists track hundreds of neurons at a time in live animals, it is hard to reverse-engineer what is really going on.

One of the most prominent and longstanding theories of how the brain learns is Hebbian learning. The idea is that neurons which activate at roughly the same time become more strongly connected, a principle often summarised as “cells that fire together wire together”. Hebbian learning can explain how brains learn simple associations: think of Pavlov’s dogs salivating at the sound of a bell. But for more complicated tasks, like learning a language, Hebbian learning seems too inefficient. Even with huge amounts of training, artificial neural networks trained in this way fall well short of human levels of performance.
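
To make the rule concrete, here is a minimal sketch of a Hebbian update in Python. The neurons, firing patterns and learning rate are invented for illustration; they are not drawn from any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.integers(0, 2, size=(100, 8))    # 100 time steps, 8 presynaptic neurons firing (1) or silent (0)
post = rng.integers(0, 2, size=(100, 4))   # 4 postsynaptic neurons

learning_rate = 0.01
weights = np.zeros((8, 4))                 # synaptic strengths, all starting at zero
for x, y in zip(pre, post):
    # "Cells that fire together wire together": a synapse is strengthened
    # whenever its presynaptic and postsynaptic neurons are active at the same time.
    weights += learning_rate * np.outer(x, y)
```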

Today’s top AI models are engineered differently. To understand how they work, imagine an artificial neural network trained to spot birds in images. Such a model would be made up of thousands of synthetic neurons, arranged in layers. Pictures are fed into the first layer of the network, which sends information about the content of each pixel to the next layer through the AI equivalent of synaptic connections. Here, neurons may use this information to pick out lines or edges before sending signals to the next layer, which might pick out eyes or feet. This process continues until the signals reach the final layer responsible for getting the big call right: “bird” or “not bird”.
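The forward pass described above can be sketched in a few lines of code. The layer sizes, weights and image below are placeholders for a toy bird-versus-not-bird classifier, not any real model.

```python
import numpy as np

def layer(x, w, b):
    # One layer: a weighted sum of its inputs followed by a simple nonlinearity.
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(1)
pixels = rng.random(64 * 64)                                     # a flattened 64x64 toy "image"

w1, b1 = 0.01 * rng.normal(size=(64 * 64, 128)), np.zeros(128)   # first layer: lines and edges
w2, b2 = 0.1 * rng.normal(size=(128, 32)), np.zeros(32)          # next layer: parts such as eyes or feet
w3, b3 = 0.1 * rng.normal(size=(32, 1)), np.zeros(1)             # final layer: the big call

h1 = layer(pixels, w1, b1)
h2 = layer(h1, w2, b2)
score = 1 / (1 + np.exp(-(h2 @ w3 + b3)))                        # squashed to a 0-1 "bird" score
print("bird" if score.item() > 0.5 else "not bird")
```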

Integral to this learning process is the so-called backpropagation-of-error algorithm, often known as backprop. If the network is shown an image of a bird but mistakenly concludes that it is not, then—once it realises the gaffe—it generates an error signal. This error signal moves backwards through the network, layer by layer, strengthening or weakening each connection in order to minimise any future errors. If the model is shown a similar image again, the tweaked connections will lead the model to correctly declare: “bird”.
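A stripped-down sketch of that loop, assuming a toy two-layer network and made-up data rather than real images, looks something like this:

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.random((16, 10))                             # 16 toy "images", 10 features each
target = rng.integers(0, 2, size=(16, 1)).astype(float)  # 1 = bird, 0 = not bird

w1 = 0.5 * rng.normal(size=(10, 8))
w2 = 0.5 * rng.normal(size=(8, 1))
lr = 0.1

for _ in range(200):
    hidden = np.tanh(images @ w1)                      # forward pass, layer by layer
    output = 1 / (1 + np.exp(-(hidden @ w2)))
    error = output - target                            # the error signal at the final layer
    grad_w2 = hidden.T @ error                         # how much each output connection contributed
    grad_hidden = (error @ w2.T) * (1 - hidden**2)     # the error, sent backwards through the layer
    grad_w1 = images.T @ grad_hidden
    w2 -= lr * grad_w2 / len(images)                   # nudge each connection to shrink future errors
    w1 -= lr * grad_w1 / len(images)
```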

Neuroscientists have always been sceptical that backpropagation could work in the brain. In 1989, shortly after Dr Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a takedown of the theory in the journal Nature. Neural networks using the backpropagation algorithm were biologically “unrealistic in almost every respect” he said.

For one thing, neurons mostly send information in one direction. For backpropagation to work in the brain, a perfect mirror image of each network of neurons would therefore have to exist in order to send the error signal backwards. In addition, artificial neurons communicate using signals of varying strengths. Biological neurons, for their part, send signals of fixed strengths, which the backprop algorithm is not designed to deal with.

All the same, the success of neural networks has renewed interest in whether some kind of backprop happens in the brain. There have been promising experimental hints it might. A preprint study posted in November 2023, for example, found that individual neurons in the brains of mice do seem to respond to unique error signals, one of the crucial ingredients of backprop-like algorithms long thought lacking in living brains.

Scientists working at the boundary between neuroscience and AI have also shown that small tweaks to backprop can make it more brain-friendly. One influential study showed that the mirror-image network once thought necessary does not have to be an exact replica of the original for learning to take place (albeit more slowly for big networks). This makes it less implausible. Others have found ways of bypassing a mirror network altogether. If artificial neural networks can be given biologically realistic features, such as specialised neurons that can integrate activity and error signals in different parts of the cell, then backprop can occur with a single set of neurons. Some researchers have also made alterations to the backprop algorithm to allow it to process spikes rather than continuous signals.
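The first of those tweaks is often referred to as feedback alignment. Below is a hedged sketch of the idea, reusing the toy setup from the backprop example: the error is carried backwards through fixed random weights rather than an exact mirror copy of the forward connections. The numbers are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
images = rng.random((16, 10))
target = rng.integers(0, 2, size=(16, 1)).astype(float)

w1 = 0.5 * rng.normal(size=(10, 8))
w2 = 0.5 * rng.normal(size=(8, 1))
feedback = 0.5 * rng.normal(size=(1, 8))   # a fixed, random backward pathway; never an exact mirror of w2
lr = 0.1

for _ in range(200):
    hidden = np.tanh(images @ w1)
    output = 1 / (1 + np.exp(-(hidden @ w2)))
    error = output - target
    w2 -= lr * (hidden.T @ error) / len(images)
    # The error travels backwards through the fixed random weights instead of w2's transpose.
    grad_hidden = (error @ feedback) * (1 - hidden**2)
    w1 -= lr * (images.T @ grad_hidden) / len(images)
```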

Other researchers are exploring rather different theories. In a paper published in Nature Neuroscience earlier this year, Yuhang Song and colleagues at Oxford University laid out a method that flips backprop on its head. In conventional backprop, error signals lead to adjustments in the synapses, which in turn cause changes in neuronal activity. The Oxford researchers proposed that the network could change the activity in the neurons first, and only then adjust the synapses to fit. They called this prospective configuration.
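A loose sketch of that "activities first, synapses second" logic is below: the neurons' activity is first allowed to settle towards a state consistent with the desired output, and only then are the synapses adjusted to reproduce that settled state. The relaxation dynamics and update rules here are invented for illustration and should not be read as the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
stimulus = rng.random((1, 10))
target = np.array([[1.0]])

w1 = 0.5 * rng.normal(size=(10, 8))
w2 = 0.5 * rng.normal(size=(8, 1))
lr, relax_steps, step = 0.1, 30, 0.1

# Step 1: settle the neurons' activity towards a state consistent with the desired output.
activity = np.tanh(stimulus @ w1)                        # start from the ordinary feedforward activity
for _ in range(relax_steps):
    output = activity @ w2
    activity += step * ((target - output) @ w2.T                 # pull activity towards the target...
                        + (np.tanh(stimulus @ w1) - activity))   # ...without straying far from the input

# Step 2: only now adjust the synapses so the network reproduces that settled state.
feedforward = np.tanh(stimulus @ w1)
w1 += lr * stimulus.T @ ((activity - feedforward) * (1 - feedforward**2))
w2 += lr * activity.T @ (target - activity @ w2)
```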

When the authors tested prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way than models trained with backprop: more robustly, and with less training. They also found that such networks offered a much closer match for human behaviour on other, very different tasks, such as learning how to move a joystick in response to different visual cues.

Learning the hard way

For now, though, all of these theories are just that. Designing experiments to prove whether backprop, or any other algorithm, is at play in the brain is surprisingly tricky. For Aran Nayebi and colleagues at Stanford University, this seemed like a problem AI could solve.

The scientists used one of four different learning algorithms to train over a thousand neural networks to perform a variety of tasks. They then monitored each network during training, recording neuronal activity and the strength of synaptic connections. Dr Nayebi and his colleagues then trained a separate supervisory meta-model to deduce the learning algorithm from the recordings. They found that the meta-model could tell which of the four algorithms had been used from recordings of just a few hundred virtual neurons, sampled at intervals during learning. The researchers hope such a meta-model could do something similar with equivalent recordings from a real brain.
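The logic of that virtual experiment can be illustrated with a toy analogue: generate "recordings" from networks trained with two different rules, then fit a simple classifier to tell the rules apart from the recordings alone. Everything here (the two rules, the summary statistic, the classifier) is invented for illustration and is far simpler than the study's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def simulate(rule, steps=50, n_neurons=20):
    # Train a tiny network with one of two rules and "record" a summary of it over time.
    w = 0.1 * rng.normal(size=n_neurons)
    recording = []
    for _ in range(steps):
        x = rng.random(n_neurons)
        y = w @ x
        if rule == "hebbian":
            w += 0.01 * y * x              # Hebbian-style update
        else:
            w -= 0.01 * (y - 1.0) * x      # error-driven, backprop-like update towards a target of 1
        recording.append(np.abs(w).mean())
    return np.array(recording)

rules = ["hebbian", "error"] * 200
recordings = np.stack([simulate(r) for r in rules])
labels = np.array([0 if r == "hebbian" else 1 for r in rules])

# The "meta-model": a classifier trained to infer the learning rule from the recordings alone.
meta_model = LogisticRegression(max_iter=1000)
meta_model.fit(recordings[:300], labels[:300])
print("accuracy on held-out recordings:", meta_model.score(recordings[300:], labels[300:]))
```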

Identifying the algorithm, or algorithms, that the brain uses to learn would be a big step forward for neuroscience. Not only would it shed light on how the body’s most mysterious organ works, it could also help scientists build new AI-powered tools to try to understand specific neural processes. Whether it could lead to better AI algorithms is unclear. For Dr Hinton, at least, backprop is probably superior to whatever happens in the brain.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
