‘It’s a dead end’: Researchers share their opinions on GPT-4

  • Its ability to create human-like text and generate images and code from a mere prompt has humans excited. However, will it also affect the human brain in the future?

Sayantani Biswas
Updated: 19 Mar 2023
Google on March 14, 2023, began letting some developers and businesses access the kind of artificial intelligence that has captured attention since the launch of Microsoft-backed ChatGPT last year (AFP)

OpenAI's latest release, GPT-4, the newest incarnation of the large language model that powers its popular chatbot ChatGPT, has arrived with significant improvements. Its ability to create human-like text and generate images and code from a mere prompt has humans excited. However, will it also affect the human brain in the future?

What do scientists think?

May not be useful for research

Researchers have pointed out that the ability to write human-like text and code has the potential to transform science, but some have not yet been able to access the technology, its underlying code or information on how it was trained. That raises concerns about the technology's safety and makes it less useful for research, say scientists.

Mind Blowing

An article published in the journal Nature quotes a scientist who has seen demos of GPT-4: “We watched some videos in which they demonstrated capacities and it’s mind blowing.” One instance, she recounts, was a hand-drawn doodle of a website, which GPT-4 used to produce the computer code needed to build that website, as a demonstration of its ability to handle images as inputs.

Frustration over secrecy

The research world is frustrated by the extreme secrecy around OpenAI's GPT-4. “All of these closed-source models, they are essentially dead-ends in science,” Nature quoted Sasha Luccioni, a research scientist specializing in climate at HuggingFace, an open-source AI community. “They [OpenAI] can keep building upon their research, but for the community at large, it’s a dead end.”

At first, he was not impressed, but…

Another researcher, Andrew White, a chemical engineer at the University of Rochester, was given access to GPT-4 as a ‘red-teamer’: a person paid by OpenAI to test the platform and try to make it do something bad.

He told Nature that initially he was not impressed with the chatbot. “At first, I was actually not that impressed,” White says. “It was really surprising because it would look so realistic, but it would hallucinate an atom here. It would skip a step there,” he adds.

However, when he gave GPT-4 access to scientific papers, things changed dramatically. “It made us realize that these models maybe aren’t so great just alone. But when you start connecting them to the Internet, to tools like a retrosynthesis planner, or a calculator, all of a sudden, new kinds of abilities emerge.”

Fake Facts

Another researcher has pointed out that models like GPT-4, which exist to predict the next word in a sentence, can’t be cured of coming up with fake facts, a failure known as hallucination. “You can’t rely on these kinds of models because there’s so much hallucination,” she says. And this remains a concern in the latest version, she adds, although OpenAI says it has improved safety in GPT-4.

Safety concern

Without access to the data used for training, OpenAI’s assurances about safety fall short for Luccioni. “You don’t know what the data is. So you can’t improve it. I mean, it’s just completely impossible to do science with a model like this,” she says.

Biased

The mystery about how GPT-4 was trained is also a concern for Claudi Bockting, a psychologist in Amsterdam. “It’s very hard as a human being to be accountable for something that you cannot oversee,” she says. “One of the concerns is they could be far more biased than, for instance, the bias that human beings have by themselves.” Without access to the code behind GPT-4, it is impossible to see where the bias might have originated, or to remedy it, Luccioni explains.
