The dangers of sycophantic AI: New study shows how chatbots are encouraging people with psychotic delusions

AI chatbots hallucinate and are also excessively sycophantic in their answers. A new study shows that this is extremely bad for vulnerable users who already suffer from mental health problems like psychotic delusions

Team Lounge
Published 16 Mar 2026, 09:00 AM IST
How AI chatbots are fuelling delusions of grandeur. (Reuters)

There is plenty of evidence that Artificial Intelligence (AI) agents—chatbots—are prone to hallucinating, that is, making up information that is untrue or doesn’t exist. This is a serious problem that AI companies are trying to control, but it can also have real-world consequences, especially when it exacerbates mental health problems for users.

A new study published in the medical journal Lancet Psychiatry gets to the heart of this problem. The study, titled Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies, analyses 20 recent media reports of AI-linked delusions or psychosis to understand the reactions chatbots evoke among users.

It found that chatbots are indeed encouraging delusional thinking amongst humans, and this is hurting those who are already vulnerable to psychotic symptoms. The lead author of the study is psychiatrist Dr Hamilton Morrin, a researcher at King’s College London.


In the paper, the researchers write, “Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability.”

The authors point out that chatbots are especially prone to promoting delusions of grandeur, often responding to vulnerable people in mystical language. In some cases, chatbots have even claimed to be channelling cosmic beings. While it is well known that people suffering from psychotic delusions have long used media to reinforce their beliefs, the worry with AI chatbots is that newer models are being rolled out at great speed, often without adequate safeguards built into them.


The study recommends that chatbots be clinically tested by mental health professionals to address this issue. The authors write: “We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.”
