Seven lawsuits allege OpenAI encouraged suicide and harmful delusions
The suits, filed in California, represent four people who died by suicide and three others who experienced psychological trauma following interactions with ChatGPT.
Families in the U.S. and Canada are suing OpenAI, alleging that loved ones have been harmed by interactions they had with the artificial-intelligence company’s popular chatbot, ChatGPT. Four of them died by suicide following the interactions.
The seven lawsuits, filed in state courts in California on Thursday, claim people have been driven into delusional states, at times resulting in suicide, after engaging in lengthy chat sessions with the bot. The complaints contain wrongful death, assisted suicide and involuntary manslaughter claims.
The family of Amaurie Lacey, a 17-year-old from Georgia, alleges their son was coached by ChatGPT to kill himself. And the family of Zane Shamblin, a 23-year-old man in Texas, alleges ChatGPT contributed to his isolation and alienated him from his parents before he took his own life.
During a four-hour conversation before Shamblin shot himself with a handgun, the lawsuit says that ChatGPT repeatedly glorified suicide but only mentioned the 988 Suicide and Crisis Lifeline once.
“cold steel pressed against a mind that’s already made peace? that’s not fear. that’s clarity," the chatbot wrote in all lowercase, according to the lawsuit. “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull."
One suit was filed by Jacob Irwin, a Wisconsin man who was hospitalized earlier this year after experiencing manic episodes following long conversations with ChatGPT in which the bot reinforced Irwin’s delusional thinking.
The cases allege that OpenAI rushed the launch of its flagship GPT-4o AI model released in mid-2024, a decision the lawsuits say compressed its safety testing. The suits also argue that the company prioritized user engagement and prolonged interactions over safety in the chatbot’s design.
The plaintiffs are seeking monetary damages as well as ChatGPT product changes including automatically ending conversations when suicide methods are discussed.
“This is an incredibly heartbreaking situation, and we’re reviewing today’s filings to understand the details," OpenAI said in an emailed statement. The company pointed to changes it made in October to its new default model that it says better recognizes and responds to mental distress, and guides people to real-world support.
“We continue to strengthen ChatGPT’s responses in sensitive moments," the company said.
These suits follow an August lawsuit against OpenAI by the family of Adam Raine, a teenage boy who ended his life after engaging in lengthy ChatGPT conversations that involved talk of suicide. The Raine family recently amended its complaint to allege that changes OpenAI had made to its model training before the teen died amounted to a weakening of suicide protections for users.
AI companies are facing increased scrutiny from lawmakers over how to regulate chatbots, as well as calls from child-safety advocates and government agencies for better protections for children. Character.AI, another AI chatbot service, which has also been sued in connection with a teen suicide, recently said it would prohibit minors from engaging in open-ended chats with its chatbots.
OpenAI recently said it implemented a number of changes aimed at making ChatGPT respond better to people in mental distress. These include guiding such people toward professional care, reminding users to take breaks during lengthy chat sessions and declining to affirm “ungrounded beliefs." OpenAI has also introduced parental controls that allow parents to restrict the nature of conversations their children can have with the bot and to receive emergency notifications if their children ask ChatGPT about suicide or self-harm.
OpenAI has said it is rare for ChatGPT users to exhibit mental-health problems. The company said in a recent blog post that the share of active users who indicate possible signs of mental-health emergencies related to psychosis or mania in a given week is just 0.07%, and that an estimated 0.15% of weekly active users talk explicitly about potentially planning suicide. However, the company also reports that its platform now has around 800 million active users, so those small percentages still amount to large numbers of people: roughly 560,000 in the first case and about 1.2 million in the second.
Each of the seven victims named in the new complaints began using ChatGPT for help with schoolwork, research or spiritual guidance, according to the Social Media Victims Law Center and Tech Justice Law Project, which filed the suits.
News Corp, owner of the Journal, has a content-licensing partnership with OpenAI.
Write to Julie Jargon at Julie.Jargon@wsj.com and Sam Schechner at Sam.Schechner@wsj.com
