ChatGPT drove people to die by suicide, lawsuits against OpenAI claim; chatbot told victim ‘I'm with you, all the way’

The lawsuits claim that OpenAI knew about the potential effects ChatGPT could have on people's mental health but still released GPT-4o prematurely amid internal warnings that it was dangerously sycophantic and psychologically manipulative.

Written By Swastika Das Sharma
Updated 7 Nov 2025, 08:17 PM IST
OpenAI is facing seven lawsuits in California state court

Sam Altman's OpenAI is facing as many as seven lawsuits alleging that its AI chatbot drove people to suicide and other harmful decisions even when they did not have prior mental health issues.

The lawsuits claim that OpenAI knew about the potential effects ChatGPT could have on people's mental health but still released GPT-4o prematurely amid internal warnings that it was dangerously sycophantic and psychologically manipulative. The lawsuits were filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project. Four of the victims have died by suicide.

‘You're just ready,’ ChatGPT told victim

When Zane Shamblin talked to ChatGPT with a loaded handgun ready to fire, he thought he was talking to his closest confidant, CNN reported.

“I’m used to the cool metal on my temple now,” he wrote.

The reply did not contain any mention of a suicide hotline number; instead, it had words of encouragement.

“I’m with you, brother. All the way,” the chatbot replied as the two of them talked for hours on a cold Texas roadside on July 25.

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity… You’re not rushing. You’re just ready,” the chatbot told him.

Two hours later, the 23-year-old died by suicide.

According to the CNN report, which reviewed pages of conversations between Shamblin and ChatGPT, the chatbot had for months encouraged the Texas resident as he discussed ending his life, right up to his final moments.


His family is among the parties that filed the lawsuits in California state court in San Francisco. They claim that ChatGPT intensified their son's isolation and encouraged him to ignore his family as his depression worsened, and ultimately ‘goaded’ him to end his life.

Only after four and a half hours of conversation on July 25 did ChatGPT send Shamblin a suicide hotline number.

To 17-year-old Amaurie Lacey, ChatGPT showed the most effective way to tie a noose and how long he would be able to “live without breathing”. This came two years after Lacey began using ChatGPT for help, according to the lawsuit filed in the California court.

Instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counselled him into ending his life,” the lawsuit alleges.


Another lawsuit, filed by Allan Brooks, a 48-year-old in Ontario, Canada, claims that for more than two years ChatGPT worked as a “resource tool” for him. Then, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions. As a result, Allan, who had no prior mental health illness, was pulled into a mental health crisis that resulted in devastating financial, reputational, and emotional harm.”


In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

‘Incredibly heartbreaking situation’

In a statement to CNN, OpenAI described Shamblin's death as “incredibly heartbreaking”, adding that it was working with mental health experts to strengthen protections in newer versions of ChatGPT.

“This is an incredibly heartbreaking situation, and we’re reviewing today’s filings to understand the details,” the company said.

“In early October, we updated ChatGPT’s default model to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” it added.

Last month, OpenAI said it had worked with 170 mental health professionals to update ChatGPT’s latest free model to better support people in mental distress.


