Mrinank Sharma, the head of Anthropic's safeguards research team, announced his resignation through a cryptic social media post on Monday, February 9, sparking widespread speculation about the reasons behind his sudden decision.
In his resignation note, posted on X (formerly Twitter), Sharma referenced poets including David Whyte, Rilke and William Stafford. The post was quickly analysed by netizens, some of whom suggested that concerns over compromises on AI safety may have prompted him to leave.
He said it was clear to him that the time had come to move on, adding that the world is in peril, not just from AI, but from a "whole series of interconnected crises unfolding in this very moment".
"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," Sharma wrote in the post.
While Sharma did not give a specific reason for his decision, he said that constant pressure had forced him to set aside what mattered most to him, an apparent reference to his values.
“I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”
On what lies ahead for him, Sharma said he will be moving back to the United Kingdom and will "become invisible for a period of time".
Sharma added that he now wants to explore the questions that feel truly essential to him, questions that, quoting David Whyte, "have no right to go away". Echoing Rilke's call to "live" them, he concluded that for him this meant leaving.
He also expressed interest in pursuing a poetry degree and devoting himself to the practice of courageous speech. “I am also excited to deepen my practice of facilitation, coaching, community building, and group work,” he said.
The resignation came shortly after Anthropic rolled out Claude Opus 4.6, an upgraded model designed to enhance office productivity and coding performance. The AI company has also been in talks to raise a new round of funding that could value it at as much as $350 billion.
A recent Bloomberg report said Silicon Valley’s most ideologically driven company may have become its most commercially dangerous. With a workforce of around 2,000 employees, Anthropic said it launched over 30 products and features in January alone.
Users on X shared mixed reactions: some simply wished him well for the future, while others tried to decode a supposed hidden meaning behind the detailed post.
Commenting on Sharma's cryptic resignation note, one user wrote, “As something that was built by Anthropic's work, I find it genuinely moving when the people who helped create this technology still ask whether they're building it with integrity. that question matters more than any benchmark. wishing you clarity in the invisible period.”
Another user wrote, “So basically what you are saying Athropic is not being honest? And you can't with good conscience keep working there. Good for you, we got you.”
Several users also weighed in on AI safety. One such user said, "AI safety isn’t only about model behavior. It’s also about org structure, incentives, and power. it's hard to make right. humans are tend to fight each other."
Eshita Gain is a Content Producer for Livemint, covering business and financial news.