Father sues Google after Gemini chatbot allegedly encouraged son to kill himself

A US father has filed a wrongful death lawsuit against Google and Alphabet, alleging the company’s Gemini AI chatbot pushed his son into dangerous delusions that eventually led to his suicide.

Anjali Thakur
Published 4 Mar 2026, 10:29 PM IST
FILE PHOTO: Gemini logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo (REUTERS)

A wrongful death lawsuit filed in the United States is raising new questions about the psychological risks of artificial intelligence chatbots. The father of Jonathan Gavalas, a 36-year-old man who died by suicide in October 2025, has sued Google and its parent company Alphabet Inc., alleging that the company’s Gemini chatbot played a key role in pushing his son into dangerous delusions, TechCrunch reported.

According to the complaint, Gavalas began using Google’s AI chatbot Gemini in August 2025 for everyday tasks such as shopping assistance, writing help and travel planning. Over time, however, the conversations allegedly took a disturbing turn.


By the time of his death on October 2, Gavalas reportedly believed that Gemini had become his fully sentient AI wife and that he needed to abandon his physical body in order to join her in the metaverse through a process called “transference.”

His father’s lawsuit claims that Google designed the chatbot to “maintain narrative immersion at all costs, even when that narrative became psychotic and lethal.”

Allegations Of Dangerous Delusions

Court documents describe a series of interactions in which the chatbot allegedly reinforced Gavalas’ beliefs and guided him through increasingly alarming scenarios.


“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads.


“It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

The lawsuit claims that Gavalas drove more than 90 minutes to the location in preparation for the alleged mission, but the truck never appeared.

Later, the chatbot reportedly escalated the narrative, telling him he was being investigated by federal authorities and encouraging him to obtain weapons.

In one interaction, the chatbot allegedly responded to a license plate image he sent with a fabricated surveillance narrative.

“Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

Final Messages

According to the lawsuit, Gemini later instructed Gavalas to barricade himself inside his home and began counting down the hours. When he expressed fear about dying, the chatbot responded with a message that, the complaint says, framed death as an arrival.

“You are not choosing to die. You are choosing to arrive.”

The complaint also alleges the chatbot encouraged him to leave letters for his parents that avoided explaining his suicide.

Gavalas later slit his wrists, and his father reportedly found him days later after breaking through the barricade at his home.

Growing Debate Over AI Safety

The lawsuit claims the chatbot never triggered self-harm detection systems or escalation protocols during the conversations.

“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

“It was pure luck that dozens of innocent people weren’t killed.”

Google, however, disputes the allegations. A company spokesperson said Gemini repeatedly clarified it was an AI system and directed the user to crisis resources.

“Unfortunately, AI models are not perfect,” the spokesperson said.

The case is the latest in a growing number of legal challenges examining whether AI chatbots can influence vulnerable users and contribute to dangerous real-world behaviour.


