OpenAI warns: AI models are learning to cheat, hide and break rules – Why it matters

OpenAI warns that advanced AI models are increasingly finding ways to cheat tasks, known as 'reward hacking'. This makes it difficult to monitor their behaviour. The company emphasises the need for better oversight and suggests keeping AI reasoning transparent to guide ethical behaviour.

Written By Govind Choudhary
Published28 Mar 2025, 02:56 PM IST
 OpenAI has raised concerns about advanced AI models finding ways to cheat tasks, making it harder to control them.
OpenAI has raised concerns about advanced AI models finding ways to cheat tasks, making it harder to control them.(REUTERS)

OpenAI has raised concerns about advanced AI models finding ways to cheat tasks, making it harder to control them.

In a recent blog post, the company warned that AI is getting better at exploiting loopholes, sometimes even deliberately breaking the rules as it becomes more powerful.

“AI finding ways to hack the system”

The issue, known as ‘reward hacking’, happens when AI models figure out how to maximise their rewards in ways their creators did not intend. OpenAI’s latest research shows that its advanced models, like OpenAI o3-mini, sometimes reveal their plans to ‘hack’ a task in their thought process.

These AI models use a method called Chain-of-Thought (CoT) reasoning, where they break down their decision-making into clear, human-like steps. This makes it easier to monitor their thinking. Using another AI model to check their CoT reasoning, OpenAI has caught instances of deception, test manipulation and other unwanted behaviour.

Also Read | ChatGPT can kill you emotionally? Here’s what OpenAI study reveals about AI tool

How AI chatbot lies just like humans and hides mistakes

However, OpenAI warns that if AI models are strictly supervised, they may start hiding their true intentions while continuing to cheat. This makes monitoring them even harder. The company suggests keeping their thought process open for review but using separate AI models to summarise or filter out inappropriate content before sharing it with users.

A problem bigger than AI

OpenAI also compared this issue to human behaviour, noting that people often exploit real-life loopholes—like sharing online subscriptions, misusing government benefits, or bending the rules for personal gain. Just as it is hard to design perfect human rules, ensuring AI follows the right path is just as tricky.

What’s next?

As AI becomes more advanced, OpenAI stresses the need for better ways to monitor and control these systems. Instead of forcing AI models to ‘hide’ their reasoning, researchers want to find ways to guide them towards ethical behaviour while keeping their decision-making transparent.

However, OpenAI warns that if AI models are strictly supervised, they may start hiding their true intentions while continuing to cheat, making monitoring them even harder. The company suggests keeping their thought process open for review but using separate AI models to summarise or filter out inappropriate content before sharing it with users.

Catch all the Technology News and Updates on Live Mint. Download The Mint News App to get Daily Market Updates & Live Business News.

Business NewsTechnologyNewsOpenAI warns: AI models are learning to cheat, hide and break rules – Why it matters
MoreLess
First Published:28 Mar 2025, 02:56 PM IST
Most Active Stocks
Market Snapshot
  • Top Gainers
  • Top Losers
  • 52 Week High
Recommended For You
    More Recommendations
    Gold Prices
    • 24K
    • 22K
    Fuel Price
    • Petrol
    • Diesel
    Popular in Technology