
‘It’s a dead end’, researchers share their opinion on ChatGPT-4

GPT-4's ability to create human-like text, handle images and generate code from a mere prompt has people excited. But what do researchers make of it?

Google on March 14, 2023, began letting some developers and businesses access the kind of artificial intelligence that has captured attention since the launch of Microsoft-backed ChatGPT last year (AFP)

OpenAI's latest release, GPT-4, the newest incarnation of the large language model that powers its popular chatbot ChatGPT, has arrived with significant improvements. It can produce human-like text, accept images as inputs and generate working code from a mere prompt, and that has people excited. But researchers have reservations.
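For readers curious what "a mere prompt" looks like in practice, here is a minimal sketch, not taken from the article, of how developers with API access could call GPT-4 at launch using OpenAI's Python client as it existed in early 2023 (the openai package's ChatCompletion interface). The API key and the prompt are placeholders.

# Minimal sketch: prompting GPT-4 through OpenAI's early-2023 Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from OpenAI's dashboard

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short Python function that reverses a string."},
    ],
)

# The generated output, whether prose or code, comes back as an ordinary chat message.
print(response.choices[0].message.content)

Whatever the model produces, a paragraph, a poem or a block of code, arrives through the same chat-style interface.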

What do scientists think?

May not be useful for research

Researchers have pointed out that the ability to write human-like text and code has the potential to transform science, but some have not yet been able to access the technology, its underlying code or information on how it was trained. That raises concerns about the technology's safety and makes it less useful for research, scientists say.

Mind Blowing

An article published in the journal Nature quotes a scientist who has seen demos of GPT-4: "We watched some videos in which they demonstrated capacities and it's mind blowing." One instance, she recounts, was a hand-drawn doodle of a website, which GPT-4 used to produce the computer code needed to build that website, demonstrating its ability to handle images as inputs.

Frustration over secrecy

The research community is frustrated by the extreme secrecy around OpenAI's GPT-4. "All of these closed-source models, they are essentially dead-ends in science," Nature quoted Sasha Luccioni, a research scientist specializing in climate at HuggingFace, an open-source-AI community, as saying. "They [OpenAI] can keep building upon their research, but for the community at large, it's a dead end."

Not impressed at first, but…

Another researcher, Andrew White, a chemical engineer at the University of Rochester, was given access to GPT-4 as a 'red-teamer': a person paid by OpenAI to test the platform and try to make it do something bad.

He told Nature that initially he was not impressed with the chatbot. “At first, I was actually not that impressed," White says. “It was really surprising because it would look so realistic, but it would hallucinate an atom here. It would skip a step there," he adds.

However, when he gave GPT-4 access to scientific papers, things changed dramatically. "It made us realize that these models maybe aren't so great just alone. But when you start connecting them to the Internet, to tools like a retrosynthesis planner or a calculator, all of a sudden new kinds of abilities emerge."
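White's point, that new abilities emerge when a model is wired up to external tools, is easier to see with a concrete pattern. The sketch below is hypothetical and is not OpenAI's interface: model_call stands in for a real GPT-4 request, and the "CALL calculator(...)" convention is invented for illustration. The idea is simply that arithmetic gets delegated to a trustworthy tool instead of being guessed, token by token, by the model.

# Hypothetical sketch of tool use: the model asks for a calculator instead of guessing.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a simple arithmetic expression rather than trusting the model."""
    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

def model_call(prompt: str) -> str:
    """Stand-in for a GPT-4 request; a real model would decide when to ask for a tool."""
    return 'CALL calculator("12.011 * 6 + 1.008 * 12")'  # e.g. a molar-mass sub-step

def answer(question: str) -> str:
    reply = model_call(question)
    if reply.startswith('CALL calculator("'):
        expr = reply[len('CALL calculator("'):-2]  # strip the invented wrapper
        return f"{calculator(expr):.3f}"
    return reply

print(answer("What is the molar mass of C6H12?"))  # prints 84.162

The design choice is the one White describes: the model plans the step, but the number comes from the calculator, so there is no atom to hallucinate.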

Fake Facts

Another researcher has pointed out that models like GPT-4, which exist to predict the next word in a sentence, cannot be cured of coming up with fake facts, a failure known as hallucinating. "You can't rely on these kinds of models because there's so much hallucination," she says. This remains a concern in the latest version, she adds, although OpenAI says it has improved safety in GPT-4.
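The phrase "predict the next word" is why hallucination is so hard to remove. The toy example below is not GPT-4, just a two-word (bigram) model trained on two true sentences; because it only ever picks a statistically plausible next word, it can stitch them into a fluent statement that happens to be false.

# Toy illustration: next-word prediction can be fluent and wrong at the same time.
import random

corpus = ("water boils at 100 degrees . "
          "mercury melts at -39 degrees .").split()

# Record which word follows which in the (true) training sentences.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

for _ in range(3):
    word, sentence = "water", ["water"]
    while word != "." and len(sentence) < 8:
        word = random.choice(follows[word])  # always a "plausible" next word
        sentence.append(word)
    print(" ".join(sentence))
# Possible output includes "water boils at -39 degrees ." - fluent, but false.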

Safety concern

Without access to the data used for training, OpenAI’s assurances about safety fall short for Luccioni. “You don’t know what the data is. So you can’t improve it. I mean, it’s just completely impossible to do science with a model like this," she says.

Biased

The mystery about how GPT-4 was trained is also a concern for Claudi Bockting, a psychologist based in Amsterdam. "It's very hard as a human being to be accountable for something that you cannot oversee," she says. "One of the concerns is they could be far more biased than, for instance, the bias that human beings have by themselves." Without access to the code behind GPT-4, it is impossible to see where the bias might have originated, or to remedy it, Luccioni explains.


Published: 18 Mar 2023, 10:18 PM IST