Will machines think like humans with GPT-4?

OpenAI has released GPT-4, its much-anticipated generative pre-trained transformer, a large language model that can learn and comprehend. Some have suggested that GPT-4 will make machines sentient, or able to think like a human. Is that so? Mint explains.

Why’s everyone excited about GPT-4?

It’s the most advanced large language model for artificial intelligence (AI). Geoffrey Hinton, the godfather of deep learning, tweeted: “Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity’s butterfly.” GPT-4 can handle both text and images, unlike GPT-3, which handles text only. It also outperforms ChatGPT. “GPT-4 can apply to Stanford as a student now. AI’s reasoning ability is OFF THE CHARTS…,” said Jim Fan, an AI scientist at Nvidia. For now, though, only ChatGPT Plus subscribers have access to GPT-4.

How does it score over earlier versions?

GPT-4 passed a simulated bar exam with a score around the top 10% of test takers; GPT-3.5, the model used to build ChatGPT, scored around the bottom 10%. GPT-4 is also more reliable and creative, can handle much more nuanced instructions, and surpasses ChatGPT in advanced reasoning. In 24 of the 26 languages tested, GPT-4 outperformed the English-language performance of GPT-3.5 and other large language models (Chinchilla, PaLM), according to OpenAI. The company also claims GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.

Is GPT-4 already being used by companies?

Be My Eyes, an app for blind and low-vision people, has built a GPT-4-powered ‘Virtual Volunteer’ for its users. Morgan Stanley is using GPT-4 in its wealth management business. Khan Academy has tapped it as a customized tutor. The government of Iceland is using GPT-4 to help preserve the Icelandic language. Kissan GPT, a chatbot for Indian farmers, plans to upgrade to GPT-4.

Does GPT-4 have any limitations?

OpenAI acknowledges that GPT-4 has limitations similar to those of earlier GPT models and is still not fully reliable: it “hallucinates” (responds confidently with fabricated answers) and makes reasoning errors. GPT-4 also lacks knowledge of current events, since it was trained on data only up to September 2021, and it “does not learn from its experience”. It does not take care to double-check its work even when it is likely to make a mistake. Nonetheless, it hallucinates less than previous models.

So will machines start thinking like us?

Many speculated that GPT-4 would bring humanity closer to the ‘singularity’, a hypothetical point in time when machines achieve artificial general intelligence (AGI) and match human intelligence. Much of this hype stemmed from the belief that GPT-4 would be released with 100 trillion parameters, roughly 500 times the size of GPT-3. But OpenAI gave no such indication. In fact, OpenAI CEO Sam Altman had tweeted earlier: “We don’t have an AGI, and people… (are) begging to be disappointed.”

Updated: 16 Mar 2023, 12:27 AM IST