The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing. Photo: Courtesy Purdue University

Scientists decode human brain using AI

In what could lead to new insights into human brain function, scientists decode what the brain is seeing by using artificial intelligence to interpret fMRI scans from people watching videos

Mumbai: In a newly published research paper that sharpens focus on the intersection of machine intelligence and neuroscience, Purdue University researchers have demonstrated how to decode what the human brain is seeing by using artificial intelligence (AI) to interpret functional magnetic resonance imaging (fMRI) scans from people watching videos, representing a sort of mind-reading technology.

The advancement, according to the researchers, could aid efforts to improve AI and lead to new insights into brain function.

Critical to the research, which appeared online on 20 October in the journal Cerebral Cortex, is a type of algorithm called a convolutional neural network. Convolutional neural networks, a form of deep learning algorithm, have been used to study how the brain processes static images and other visual stimuli.
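
For readers unfamiliar with the term, the sketch below (in Python, using only NumPy) illustrates the basic operation a convolutional layer performs: sliding a small filter over an image and recording how strongly each patch matches it. The toy image and the edge-detecting filter are invented purely for illustration; they are not from the Purdue study.

```python
# Minimal sketch of the convolution operation at the heart of a
# convolutional neural network: slide a small filter over an image
# and record how strongly each patch matches the filter.
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 "image" with a bright vertical bar, and a vertical-edge filter.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0
edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)

feature_map = np.maximum(convolve2d(image, edge_filter), 0)  # ReLU nonlinearity
print(feature_map)
```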

Deep learning itself is an advanced machine learning technique that uses layered (hence “deep”) neural networks (neural nets) that are loosely modelled on the human brain. Neural nets enable image recognition, speech recognition, self-driving cars and smart home automation devices, among other things.
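
As a rough illustration of what “layered” means, here is a minimal sketch of a three-layer network’s forward pass. The layer sizes are made up and the weights are random; in a real system they would be learned from data.

```python
# Minimal sketch of a "deep" (layered) neural network: each layer is a
# linear map followed by a nonlinearity, and layers are stacked so later
# layers operate on the features produced by earlier ones.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

# Three layers mapping a 64-dimensional input to 10 output scores.
# Weights are random here purely for illustration; in practice they are
# learned from data by gradient descent.
layers = [
    (rng.standard_normal((64, 32)) * 0.1, np.zeros(32)),
    (rng.standard_normal((32, 16)) * 0.1, np.zeros(16)),
    (rng.standard_normal((16, 10)) * 0.1, np.zeros(10)),
]

def forward(x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:   # nonlinearity between layers
            x = relu(x)
    return x

scores = forward(rng.standard_normal(64))
print(scores.shape)  # (10,)
```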

The researchers, according to a 23 October press release, acquired 11.5 hours of fMRI data from each of three female subjects watching 972 video clips, including ones showing people or animals in action and nature scenes. First, the data were used to train the convolutional neural network model to predict the activity in the brain’s visual cortex while the subjects were watching the videos. Then they used the model to decode fMRI data from the subjects and reconstruct the videos, even clips the model had never processed before.
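
A hedged sketch of this “encoding” step is shown below. It assumes the video has already been turned into a feature vector per time point (the study used a convolutional neural network for this), and it uses simulated data with a simple ridge regression as a stand-in for the learned mapping from video features to visual-cortex voxel responses.

```python
# Hedged sketch of the "encoding" step: learn to predict visual-cortex
# activity (fMRI voxel responses) from features of the video frames.
# The video features and voxel data below are random placeholders, and a
# ridge regression stands in for the study's learned mapping.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints, n_features, n_voxels = 500, 128, 1000
video_features = rng.standard_normal((n_timepoints, n_features))   # e.g. per-frame CNN activations
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
voxel_responses = video_features @ true_weights + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# Fit the encoding model on a training segment...
encoder = Ridge(alpha=10.0)
encoder.fit(video_features[:400], voxel_responses[:400])

# ...and check how well it predicts held-out brain activity.
predicted = encoder.predict(video_features[400:])
corr = np.corrcoef(predicted.ravel(), voxel_responses[400:].ravel())[0, 1]
print(f"held-out prediction correlation: {corr:.2f}")
```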

The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer’s interpretation of what the person’s brain saw based on fMRI data. The researchers were able to figure out how certain locations in the brain were associated with specific information a person was seeing.
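
The reverse, “decoding”, direction can be sketched in the same spirit: classify which category of image a viewer saw from the voxel pattern alone. The data below are simulated, and the plain logistic-regression classifier is only a stand-in for the study’s neural-network model.

```python
# Hedged sketch of decoding: predict the category of what a person was
# viewing from the fMRI voxel pattern alone, using simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n_samples, n_voxels, n_categories = 300, 200, 3   # e.g. people / animals / scenery
labels = rng.integers(0, n_categories, size=n_samples)
category_patterns = rng.standard_normal((n_categories, n_voxels))   # each category evokes a distinct pattern
voxels = category_patterns[labels] + rng.standard_normal((n_samples, n_voxels))

decoder = LogisticRegression(max_iter=1000)
decoder.fit(voxels[:200], labels[:200])                                 # train on part of the scans
print("decoding accuracy:", decoder.score(voxels[200:], labels[200:]))  # test on held-out scans
```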

The researchers also were able to use models trained with data from one human subject to predict and decode the brain activity of a different human subject, a process called cross-subject encoding and decoding. This finding, the researchers say, is important because it demonstrates the potential for broad applications of such models to study brain function, even for people with visual deficits.
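
The sketch below illustrates only the general idea, not the study’s actual method: a decoder trained on simulated data for one “subject” is reused on a second simulated “subject” after a simple linear alignment of their voxel spaces, learned from stimuli both saw.

```python
# Hedged sketch of cross-subject decoding: a decoder trained on one
# subject is reused on another. Real fMRI data would need anatomical or
# functional alignment; here both "subjects" are simulated so a simple
# linear alignment (ridge regression on shared stimuli) suffices.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(2)
n_samples, n_voxels, n_categories = 300, 150, 3

labels = rng.integers(0, n_categories, size=n_samples)
patterns_a = rng.standard_normal((n_categories, n_voxels))
subject_a = patterns_a[labels] + 0.5 * rng.standard_normal((n_samples, n_voxels))

# Subject B sees the same stimuli but has a different voxel organisation,
# simulated here as an unknown linear scrambling of subject A's space.
mixing = rng.standard_normal((n_voxels, n_voxels)) / np.sqrt(n_voxels)
subject_b = subject_a @ mixing + 0.5 * rng.standard_normal((n_samples, n_voxels))

# Train the decoder on subject A only.
decoder = LogisticRegression(max_iter=1000).fit(subject_a[:200], labels[:200])

# Learn an alignment from B's space to A's using shared stimuli, then
# decode subject B's held-out scans with subject A's decoder.
align = Ridge(alpha=1.0).fit(subject_b[:200], subject_a[:200])
b_aligned = align.predict(subject_b[200:])
print("cross-subject accuracy:", decoder.score(b_aligned, labels[200:]))
```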

Researchers elsewhere, too, are continually doing similar experiments to better understand how the brain works. I had referred to a few such developments in my column last January.

In June 2015, for instance, Robert Leech, a senior lecturer in the Division of Brain Sciences at Imperial College London, and Romy Lorenz, then a PhD student at the same institution, published a paper on arxiv.org in which they said they were developing an alternative framework, the Automatic Neuroscientist.

This concept “turns the standard functional magnetic resonance imaging (fMRI) approach on its head”, the paper said. Understanding how cognition and the brain interrelate is a central aim of functional neuroimaging, so the researchers used real-time fMRI in combination with machine learning techniques to automatically design neuroimaging experiments.
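
The closed-loop flavour of that approach can be sketched as follows. The Leech-Lorenz framework uses Bayesian optimisation on real-time fMRI; the toy loop below substitutes a simulated response function, hypothetical stimulus conditions and a simple epsilon-greedy choice rule, purely to show how the machine, rather than the experimenter, picks the next stimulus.

```python
# Hedged sketch of the closed-loop idea behind the "Automatic
# Neuroscientist": instead of running a fixed experiment, let the machine
# choose the next stimulus based on the brain responses measured so far.
# The stimulus conditions and response values here are invented.
import numpy as np

rng = np.random.default_rng(3)

conditions = ["faces", "scenes", "objects", "scrambled"]   # hypothetical stimulus conditions
true_response = {"faces": 1.0, "scenes": 0.6, "objects": 0.4, "scrambled": 0.1}

observed = {c: [] for c in conditions}

def measure(condition):
    """Stand-in for acquiring a real-time fMRI response to one stimulus block."""
    return true_response[condition] + 0.3 * rng.standard_normal()

for trial in range(30):
    if trial < len(conditions) or rng.random() < 0.2:
        choice = conditions[trial % len(conditions)]                 # explore
    else:
        choice = max(observed, key=lambda c: np.mean(observed[c]))   # exploit best estimate so far
    observed[choice].append(measure(choice))

best = max(observed, key=lambda c: np.mean(observed[c]))
print("condition estimated to drive the target region most strongly:", best)
```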

Meanwhile, Neuralink, a startup co-founded by billionaire and Tesla CEO Elon Musk, is developing ultra-high-bandwidth brain-machine interfaces to connect humans and computers. The idea is to help human beings merge with software and keep pace with advancements in AI.

Similarly, the US-based Defense Advanced Research Projects Agency (DARPA) announced in January 2016 that it was working on the Neural Engineering System Design (NESD) program, which it hopes will dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

On 10 October, it said in a press release that it had awarded contracts to five research organizations and one company (Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley).

The aim of these contracts is to have the institutions develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface, and to integrate them into working systems that can support potential future therapies for sensory restoration.

Four of the teams will focus on vision while two will focus on aspects of hearing and speech.
