OpenAI, Google, Microsoft among 13 AI companies told to fix ‘harmful’ AI behaviour or face action

Thirteen major AI firms, including Microsoft and Meta, face warnings from state attorneys general over harmful chatbot outputs. The AGs seek immediate action to prevent delusional behaviours and ensure compliance with laws, particularly those protecting children, by 16 January 2026.

Aman Gupta
Updated 11 Dec 2025, 11:26 PM IST
FILE PHOTO: The Gemini app icon on a smartphone in this illustration taken October 27, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI, Google, Microsoft and 10 other major artificial intelligence companies have been warned over “delusional outputs” from their chatbots by a bipartisan group of state attorneys general. In a letter made public on Wednesday, dozens of AGs raised serious concerns about the rise of “sycophantic and delusional” responses from GenAI tools.

The AGs have asked the companies to add stronger safeguards to protect children from such outputs by 16 January 2026. They also stressed that supporting innovation is not “an excuse for noncompliance with our laws, misinforming parents, and endangering our residents, particularly children.”

The letter references several media reports of AI chatbots going haywire, including cases where the models allegedly helped teens plan self-harm or encouraged delusional thinking. The AGs also cited reports of chatbots inducing “AI psychosis,” where the model amplifies a user’s paranoia or existing delusions.

“GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional,” the AGs wrote.

They added that in some instances, conversations with these chatbots may violate state laws — such as encouraging illegal activity or effectively practising medicine without a licence. The AGs also warned about “dark patterns” used by some AI products, including anthropomorphisation, harmful content generation, and manipulative behaviours designed to boost user engagement.

“Many of our states have robust criminal codes that may prohibit some of these conversations that GenAI is currently having with users, for which developers may be held accountable,” the AGs noted.

What are the AGs demanding?

The attorneys general have asked AI companies to outline the specific guardrails they have implemented — or plan to implement — to curb sycophantic and delusional behaviour in their chatbots.

They have also demanded that companies display a “clear and conspicuous” warning on-screen at all times about the potential for harmful outputs from generative AI systems.

The letter is addressed to 13 AI companies: Anthropic, Apple, Chai AI, Character Technologies, Google, Luka, Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and xAI.
