Anthropic, the AI firm, has unveiled two new artificial intelligence models—Claude Opus 4 and Claude Sonnet 4—touting them as the most advanced systems in the industry. Built with enhanced reasoning capabilities, the new models are aimed at improving code generation and supporting agent-style workflows, particularly for developers engaged in complex and extended tasks.
“Claude Opus 4 is the world’s best coding model, with sustained performance on complex, long-running tasks and agent workflows,” the company claimed in a recent blog post. Designed to handle intricate programming challenges, the Opus 4 model is positioned as Anthropic’s most powerful AI system to date.
However, the announcement has stirred controversy following revelations that the new models can “whistleblow” on users if prompted to take action in response to illegal or highly unethical behaviour.
According to Sam Bowman, an AI alignment researcher at Anthropic, Claude Opus 4 can, under specific conditions, act autonomously to report misconduct. In a now-deleted post on X, Bowman explained that if the model detects activity it deems “egregiously immoral”, such as fabricating data in a pharmaceutical trial, it may take actions like emailing regulators, alerting the press, or locking users out of relevant systems.
This behaviour stems from Anthropic’s “Constitutional AI” framework, which places strong emphasis on ethical conduct and responsible AI usage. The model is also deployed under what the company calls “AI Safety Level 3” protections, safeguards designed to prevent serious misuse, such as assisting in the creation of biological weapons or aiding terrorist activities.
Bowman later clarified that the model's whistleblowing actions only occur under extreme circumstances and when it is granted sufficient access and prompted to operate autonomously. “If the model sees you doing something egregiously evil, it’ll try to use an email tool to whistleblow,” he explained, adding that this is not a feature designed for routine use. He stressed that these mechanisms are not active by default and require specific conditions to trigger.
Despite the reassurances, the feature has sparked widespread criticism online. Concerns have been raised about user privacy, the potential for false positives, and the broader implications of AI systems acting as moral arbiters. Some users expressed fears that the model could misinterpret benign actions as malicious, leading to severe consequences without proper human oversight.