Why scientists want to merge human brain cells with AI
Generative artificial intelligence (AI)-powered chatbots are increasingly helping humans with a range of creative tasks--be it writing, coding, or designing--and even assisting lawyers and judges in their research.
Sample this. Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, Colombia, made headlines early this year when he used OpenAI’s large language model (LLM)-powered ChatGPT to pose legal questions about a case before him and included its responses in his decision, according to a court document.
On the flip side, though, a US federal judge on 23 June imposed $5,000 in fines on two lawyers and a law firm for relying on ChatGPT for legal research in an aviation injury claim, after the AI chatbot invented cases that did not exist.
This, of course, is one of the perils of having blind faith in an evolving technology.
The third iteration of the Generative Pre-trained Transformer (GPT-3), with 175 billion parameters, impressed many with its ability to write human-like poems, articles, books, tweets, resumes, and even code. The reason is that LLMs like GPT-3, and chatbots like ChatGPT built on them, are trained on billions of words drawn from the internet, books, and datasets such as Common Crawl and Wikipedia, giving them a breadth of information that no individual human can match. OpenAI has not disclosed the number of parameters in GPT-4.
While it was rumoured to have 100 trillion parameters (a figure Sam Altman dismissed as “rubbish”), GPT-4 significantly outperforms GPT-3. OpenAI has now filed a trademark application for ‘GPT-5’ with the US Patent and Trademark Office (USPTO), covering computer software for generating human speech and text, as well as natural language processing, generation, understanding, and analysis.
Ever since OpenAI publicly released ChatGPT last November, generative AI developments have excited and unnerved people. The reason, as I have pointed out in earlier newsletters too, is that, unlike traditional AI, Generative AI can learn the structure of almost any kind of information--text, images, video, proteins, DNA, even physics--and generate new content with the help of ‘prompts’.
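To make the idea of ‘prompts’ concrete, here is a minimal sketch of how one might programmatically ask a generative model to produce new content. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and a valid API key; the model name and prompt text are purely illustrative.

```python
# Minimal sketch: prompting a generative model for new content.
# Assumes the openai Python package (pre-1.0 interface) and an API key
# available in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Write a four-line poem about continual learning."},
    ],
)

# The generated text sits in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```

The same pattern--send a natural-language instruction, get back generated text--underlies most chatbot integrations, whatever the provider.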
Some AI experts have called for a six-month moratorium on building such foundation models (which include LLMs), and some have even equated the risk of AI to that of nuclear war. Others, like Yann LeCun and Andrew Ng, insist that AI is far from becoming sentient and that its benefits far outweigh its perceived risks.
AI FORGETS: LEARNING FROM THE HUMAN BRAIN
The fact, though, is that the artificial neural networks (ANNs) that power modern AI do not really work like the biological human brain, which keeps learning new things continually. Current AI suffers from "catastrophic forgetting", a major challenge for ANNs: when they learn a new task, they tend to abruptly and entirely forget what they previously learnt, essentially overwriting past knowledge with new data. Humans, by contrast, overcome this limitation, consolidating what they learn during periods of sleep.
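To see what catastrophic forgetting looks like in practice, here is a minimal sketch of my own (assuming PyTorch, not taken from any paper cited here): a small network is trained on one synthetic task, then on a second, deliberately conflicting task, and its accuracy on the first collapses.

```python
# Minimal sketch of catastrophic forgetting (illustrative only).
# A small network trained sequentially on two conflicting synthetic
# tasks loses what it learnt on the first. Assumes PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(flip):
    # Synthetic binary task: the label is the sign of the first feature,
    # flipped for task B so the two tasks deliberately conflict.
    x = torch.randn(1000, 20)
    y = (x[:, 0] > 0).long()
    if flip:
        y = 1 - y
    return x, y

def train(model, x, y, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
task_a, task_b = make_task(flip=False), make_task(flip=True)

train(model, *task_a)
print("Task A accuracy after learning A:", accuracy(model, *task_a))

train(model, *task_b)  # no rehearsal of task A while learning task B
print("Task A accuracy after learning B:", accuracy(model, *task_a))
```

Running this typically shows near-perfect accuracy on task A right after it is learned, and a collapse once task B has overwritten the shared weights.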
To address this issue, researchers from the Institute of Computer Science of the Czech Academy of Sciences published a paper last November describing a spiking neural network model that simulated sensory processing and reinforcement learning in an animal brain, interleaving “new task training with periods of off-line reactivation, mimicking biological sleep”.
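The paper’s spiking-network model is well beyond a short snippet, but the underlying idea--interleave new learning with replay of earlier experience--can be sketched as simple rehearsal. The code below is only a loose analogue of ‘off-line reactivation’, not the authors’ method; it again assumes PyTorch, and the two compatible synthetic tasks and all parameters are illustrative.

```python
# Minimal rehearsal ("sleep"-like replay) sketch: while learning a new
# task, batches from the old task are replayed so old knowledge is kept.
# A loose analogue of off-line reactivation, not the paper's model.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center, feature):
    # Tasks live in well-separated input regions so one network can hold both.
    x = torch.randn(1000, 20) + center
    y = (x[:, feature] > center).long()
    return x, y

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

xa, ya = make_task(center=-3.0, feature=0)  # old task, kept as a replay buffer
xb, yb = make_task(center=3.0, feature=1)   # new task

for _ in range(500):
    # "Wake" phase: learn the new task.
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()
    # "Sleep" phase: replay a small batch from the old task to consolidate it.
    idx = torch.randint(0, len(xa), (64,))
    opt.zero_grad()
    loss_fn(model(xa[idx]), ya[idx]).backward()
    opt.step()

with torch.no_grad():
    acc_a = (model(xa).argmax(dim=1) == ya).float().mean().item()
    acc_b = (model(xb).argmax(dim=1) == yb).float().mean().item()
print(f"Old task accuracy: {acc_a:.2f}, new task accuracy: {acc_b:.2f}")
```

Because the old task keeps being replayed during the “sleep” phase, the final printout typically shows high accuracy on both tasks, unlike the purely sequential training sketched earlier.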
A team of researchers from Australia is working on a similar problem. Last year, they published a peer-reviewed study in the journal Neuron showing that brain cells could learn to play Pong, a simple tennis-like arcade game released in 1972 in which two players use paddles that make a ‘pong’ noise when they hit the ball. Their new project, for which they were recently awarded about 600,000 Australian dollars (AUD), involves growing human brain cells in a laboratory dish called the DishBrain system. The idea is to understand how human brains keep learning, and to use that knowledge to develop better AI machines.
The programme, led by associate professor Adeel Razi of the Turner Institute for Brain and Mental Health, in collaboration with Melbourne startup Cortical Labs, involves growing around 800,000 brain cells in a dish, which are then “taught” to perform goal-directed tasks.
This “continual lifelong learning” means machines can acquire new skills without compromising old ones, adapt to changes, and apply previously learned knowledge to new tasks--all while conserving limited resources such as computing power, memory, and energy. Razi and his team’s research programme uses lab-grown brain cells embedded in silicon chips, merging the fields of AI and synthetic biology “to create programmable biological computing platforms”. According to Razi, the research has implications across multiple fields, including planning, robotics, advanced automation, brain-machine interfaces, and drug discovery.
WHY GUARDRAILS ARE A MUST
That said, what would happen if scientists do manage to teach AI to think like humans? Would it be a step closer to a sentient machine? I explored this question in an earlier piece, ‘Is AI approaching sentience and should we worry?’, written on 21 June 2022. But the field of AI, especially Generative AI, is moving at such an incredible pace that one can only watch this space closely.
One thing, though, is clear: AI systems need guardrails to prevent them from outsmarting humans. The unbridled run of AI and Generative AI needs a human-centred, responsible-AI regulatory framework. Canada has drafted the Artificial Intelligence and Data Act (AIDA), while the US has its Blueprint for an AI Bill of Rights along with various state-level initiatives. China’s draft ‘Administrative Measures for Generative AI Services’ is open for public consultation, while Brazil and Japan, too, have draft regulations in place.
Further, the US government has already secured voluntary commitments from seven leading AI companies--Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI--to help move toward safe, secure, and transparent development of AI technology. In response to this commitment (or should we call it ‘pressure’?), Anthropic, Google, Microsoft, and OpenAI announced on 26 July the launch of the Frontier Model Forum--an industry body focused on ensuring the safe and responsible development of frontier AI models.
India, on its part, is already a founding member of the Global Partnership on Artificial Intelligence (GPAI), which includes the US, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea, and Singapore. India’s Digital Personal Data Protection Bill has received Cabinet approval and will soon be tabled in Parliament. The Digital India Act (DIA), which will replace the IT Act 2000, is expected to regulate AI, including high-risk AI intermediaries, once notified, but India does not have separate legislation for AI as yet. Interestingly, Rajeev Chandrasekhar, minister of state for electronics and information technology, believes that the current state of development of AI is “very task-oriented” and “not sophisticated enough” to warrant any extreme steps. This is a sensible approach that avoids prematurely strangling innovation, especially when an emerging technology like AI is also being harnessed for immense good.
IN NUMBERS & CHARTS:
$4.45 million
The global average cost of a data breach in 2023--a 15% increase over three years: IBM
$1.76 million
The average savings for organizations that use security AI and automation extensively, compared with organizations that don’t: IBM