GPT, Claude, Llama? How to tell which AI model is best

  • Beware model-makers marking their own homework

The Economist
Published 9 Oct 2024, 05:48 PM IST
Having accurate, reliable benchmarks for AI models matters, and not just for the bragging rights of the firms making them. Image: Pixabay

When Meta, the parent company of Facebook, announced its latest open-source large language model (LLM) on July 23rd, it claimed that the most powerful version of Llama 3.1 had “state-of-the-art capabilities that rival the best closed-source models” such as GPT-4o and Claude 3.5 Sonnet. Meta’s announcement included a table, showing the scores achieved by these and other models on a series of popular benchmarks with names such as MMLU, GSM8K and GPQA.

On MMLU, for example, the most powerful version of Llama 3.1 scored 88.6%, against 88.7% for GPT-4o and 88.3% for Claude 3.5 Sonnet, rival models made by OpenAI and Anthropic, two AI startups, respectively. Claude 3.5 Sonnet had itself been unveiled on June 20th, again with a table of impressive benchmark scores. And on July 24th, the day after Llama 3.1’s debut, Mistral, a French AI startup, announced Mistral Large 2, its latest LLM, with—you’ve guessed it—yet another table of benchmarks. Where do such numbers come from, and can they be trusted?

Having accurate, reliable benchmarks for AI models matters, and not just for the bragging rights of the firms making them. Benchmarks “define and drive progress”, telling model-makers where they stand and incentivising them to improve, says Percy Liang of the Institute for Human-Centred Artificial Intelligence at Stanford University. Benchmarks chart the field’s overall progress and show how AI systems compare with humans at specific tasks. They can also help users decide which model to use for a particular job and identify promising new entrants in the space, says Clémentine Fourrier, a specialist in evaluating LLMs at Hugging Face, a startup that provides tools for AI developers.

But, says Dr Fourrier, benchmark scores “should be taken with a pinch of salt”. Model-makers are, in effect, marking their own homework—and then using the results to hype their products and talk up their company valuations. Yet all too often, she says, their grandiose claims fail to match real-world performance, because existing benchmarks, and the ways they are applied, are flawed in various ways.

One problem with benchmarks such as MMLU (massive multi-task language understanding) is that they are simply too easy for today’s models. MMLU was created in 2020 and consists of 15,908 multiple-choice questions, each with four possible answers, across 57 topics including maths, American history, science and law. At the time, most language models scored little better than 25% on MMLU, which is what you would get by picking answers at random; OpenAI’s GPT-3 did best, with a score of 43.9%. But since then, models have improved, with the best now scoring between 88% and 90%.
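To see why random guessing sets a floor of about 25%, consider a minimal sketch of how a multiple-choice benchmark like MMLU is scored. The question and the model stub below are hypothetical; real evaluations load the published dataset and query an actual LLM.

```python
# Minimal sketch of scoring a multiple-choice benchmark such as MMLU.
# The question and pick_answer() stub are hypothetical stand-ins.
import random

questions = [
    {"question": "Which organ produces insulin?",
     "choices": ["A. Liver", "B. Pancreas", "C. Kidney", "D. Spleen"],
     "answer": "B"},
    # ...the real benchmark has 15,908 questions across 57 topics
]

def pick_answer(question, choices):
    """Stand-in for an LLM call; here it simply guesses at random."""
    return random.choice(["A", "B", "C", "D"])

correct = sum(pick_answer(q["question"], q["choices"]) == q["answer"]
              for q in questions)
accuracy = correct / len(questions)
# Over many four-option questions, a random guesser converges on ~25%.
print(f"Accuracy: {accuracy:.1%}")
```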

...

This means it is difficult to draw meaningful distinctions from their scores, a problem known as “saturation” (see chart). “It’s like grading high-school students on middle-school tests,” says Dr Fourrier. More difficult benchmarks have been devised—MMLU-Pro has tougher questions and ten possible answers rather than four. GPQA is like MMLU at PhD level, on selected science topics; today’s best models tend to score between 50% and 60% on it. Another benchmark, MuSR (multi-step soft reasoning), tests reasoning ability using, for example, murder-mystery scenarios. When a person reads such a story and works out who the killer is, they are combining an understanding of motivation with language comprehension and logical deduction. AI models are not so good at this kind of “soft reasoning” over multiple steps. So far, few models score better than random on MuSR.

MMLU also highlights two other problems. One is that the answers in such tests are sometimes wrong. A study carried out by Aryo Gema of the University of Edinburgh and colleagues, published in June, found that, of the questions they sampled, 57% of MMLU’s virology questions and 26% of its logical-fallacy ones contained errors. Some had no correct answer; others had more than one. (The researchers cleaned up the MMLU questions to create a new benchmark, MMLU-Redux.)

Then there is a deeper issue, known as “contamination”. LLMs are trained using data from the internet, which may include the exact questions and answers for MMLU and other benchmarks. The models may, in short, be cheating, intentionally or not, because they have seen the tests in advance. Indeed, some model-makers may deliberately train a model with benchmark data to boost its score. But the score then fails to reflect the model’s true ability. One way to get around this problem is to create “private” benchmarks for which the questions are kept secret, or released only in a tightly controlled manner, to ensure that they are not used for training (GPQA does this). But then only those with access can independently verify a model’s scores.
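One common way of looking for contamination is to check whether long word sequences from benchmark questions also appear verbatim in a model's training data. The sketch below illustrates the idea in miniature; it is not any lab's actual pipeline, and the corpus and question are invented for illustration.

```python
# Simplified sketch of an n-gram overlap check for benchmark contamination.
# A question is flagged if any 8-word sequence from it also appears in the
# training corpus. Real pipelines work at vastly larger scale, with hashing
# and careful normalisation; this is illustrative only.
import re

def ngrams(text, n=8):
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(benchmark_question, training_corpus, n=8):
    return bool(ngrams(benchmark_question, n) & ngrams(training_corpus, n))

# Hypothetical data for illustration:
corpus = "... the pancreas produces insulin which regulates blood sugar in humans ..."
question = "Which of the following organs produces insulin, which regulates blood sugar in humans?"
print(is_contaminated(question, corpus))   # True: the question overlaps the training data
```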

To complicate matters further, it turns out that small changes in the way questions are posed to models can significantly affect their scores. In a multiple-choice test, asking an AI model to state the answer directly, or to reply with the letter or number corresponding to the correct answer, can produce different results. That affects reproducibility and comparability.
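The snippet below shows two ways of posing the same multiple-choice question. The wording is invented, but it illustrates how the format an evaluation chooses changes what the model is actually being asked to do, and hence its score.

```python
# Two ways of posing the same multiple-choice question to a model.
# The question and templates are hypothetical; the point is that scores
# can differ depending on which format an evaluation uses.
question = "Which organ produces insulin?"
choices = ["Liver", "Pancreas", "Kidney", "Spleen"]

# Format 1: ask the model to reply with the letter of the correct option.
prompt_letter = (
    f"{question}\n"
    + "\n".join(f"{letter}. {c}" for letter, c in zip("ABCD", choices))
    + "\nAnswer with the letter of the correct option:"
)

# Format 2: ask the model to state the answer directly.
prompt_freeform = f"{question} Answer in one word:"

print(prompt_letter)
print(prompt_freeform)
```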

Automated testing systems are now used to evaluate models against benchmarks in a standardised manner. Dr Liang’s team at Stanford has built one such system, called HELM (holistic evaluation of language models), which generates leaderboards showing how a range of models perform on various benchmarks. Dr Fourrier’s team at Hugging Face uses another such system, EleutherAI’s LM Evaluation Harness, to generate leaderboards for open-source models. These leaderboards are more trustworthy than the tables of results provided by model-makers, because the benchmark scores have been generated in a consistent way.
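In outline, such a harness runs every model on the same questions, with the same prompt format and the same scoring rule, and ranks the results. The sketch below is a generic illustration of that idea, not HELM’s or EleutherAI’s actual code; the “models” are hypothetical stand-ins.

```python
# Generic illustration of what an evaluation harness does: apply one prompt
# format and one scoring rule to every model, so the numbers are comparable.
import random

QUESTIONS = [
    {"prompt": "Which organ produces insulin?\nA. Liver\nB. Pancreas\nC. Kidney\nD. Spleen\nAnswer:",
     "answer": "B"},
]

def evaluate(model_fn, questions):
    correct = sum(model_fn(q["prompt"]) == q["answer"] for q in questions)
    return correct / len(questions)

models = {
    "random-baseline": lambda prompt: random.choice("ABCD"),
    "always-B":        lambda prompt: "B",   # stand-in for a real LLM call
}

leaderboard = sorted(((evaluate(fn, QUESTIONS), name) for name, fn in models.items()),
                     reverse=True)
for score, name in leaderboard:
    print(f"{name}: {score:.0%}")
```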

The greatest trick AI ever pulled

As models gain new skills, new benchmarks are being developed to assess them. GAIA, for example, tests AI models on real-world problem-solving. (Some of the answers are kept secret to avoid contamination.) NoCha (novel challenge), announced in June, is a “long context” benchmark consisting of 1,001 questions about 67 recently published English-language novels. The answers depend on having read and understood the whole book, which is supplied to the model as part of the test. Recent novels were chosen because they are unlikely to have been used as training data. Other benchmarks assess models’ ability to solve biology problems or their tendency to hallucinate.

But new benchmarks can be expensive to develop, because they often require human experts to create a detailed set of questions and answers. One answer is to use LLMs themselves to develop new benchmarks. Dr Liang is doing this with a project called AutoBencher, which extracts questions and answers from source documents and identifies the hardest ones.
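The general idea can be sketched as follows: have an LLM draft question-and-answer pairs from source documents, then keep only the questions that current models get wrong. This is an illustrative sketch, not the AutoBencher implementation; the generate_qa() and candidate_model() functions are hypothetical stubs standing in for real LLM calls.

```python
# Illustrative sketch of LLM-assisted benchmark construction: draft Q&A
# pairs from documents, keep only the questions a candidate model fails.
def generate_qa(document):
    """Stub: in practice an LLM drafts (question, answer) pairs from the text."""
    return [("What regulates blood sugar?", "insulin")]

def candidate_model(question):
    """Stub for the model whose failures make a question count as 'hard'."""
    return "glucose"

def build_benchmark(documents):
    benchmark = []
    for doc in documents:
        for question, answer in generate_qa(doc):
            if candidate_model(question).lower() != answer.lower():
                benchmark.append((question, answer))   # keep only the hard ones
    return benchmark

docs = ["The pancreas produces insulin, which regulates blood sugar."]
print(build_benchmark(docs))
```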

Anthropic, the startup behind the Claude LLM, has started funding the creation of benchmarks directly, with a particular emphasis on AI safety. “We are super-undersupplied on benchmarks for safety,” says Logan Graham, a researcher at Anthropic. “We are in a dark forest of not knowing what the models are capable of.” On July 1st the company began inviting proposals for new benchmarks, and tools for generating them, which it will co-fund, with a view to making them available to all. This might involve developing ways to assess a model’s ability to develop cyber-attack tools, say, or its willingness to provide advice on making chemical or biological weapons. These benchmarks can then be used to assess the safety of a model before public release.

Historically, says Dr Graham, AI benchmarks have been devised by academics. But as AI is commercialised and deployed in a range of fields, there is a growing need for reliable and specific benchmarks. Startups that specialise in providing AI benchmarks are starting to appear, he notes. “Our goal is to pump-prime the market,” he says, to give researchers, regulators and academics the tools they need to assess the capabilities of AI models, good and bad. The days of AI labs marking their own homework could soon be over.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com

 
