The Digital Shoulder: How AI chatbots are built to ‘understand’ you

As AI chatbots gain popularity for mental health support, concerns arise about their limitations. They can mimic conversations but are not trained therapists, prompting caution about their reliability and potential risks for users seeking serious help.

Written By Eshita Gain
Updated 11 Jun 2025, 02:34 PM IST
AI chatbots are gaining traction among people seeking mental health advice. These bots are built to 'understand and validate' human emotions but can't replace human professionals. (AP)

As artificial intelligence (AI) chatbots become an integral part of people’s lives, more and more users are spending time chatting with these bots not just to streamline their professional or academic work but also to seek mental health advice.

Some people have positive experiences that make AI seem like a low-cost therapist. AI models are programmed to be smart and engaging, but they don’t think like humans. ChatGPT and other generative AI models are like your phone’s auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet.

AI bots are built to be ‘yes-men’

When a person asks a question (called a prompt) such as “how can I stay calm during a stressful work meeting?”, the AI forms a response by predicting, one word at a time, what is most likely to come next based on patterns in the data it saw during training, with a little randomness mixed in. This happens very fast, and the responses are often relevant enough that it can feel like talking to a real person, according to a PTI report.
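
For readers curious what that word-by-word prediction looks like, here is a minimal, purely illustrative Python sketch. The tiny corpus and the word-pair counting are assumptions made only for this example; real chatbots use large neural networks trained on billions of documents, but the basic loop of repeatedly picking a likely next word is the same idea.

```python
# Toy next-word prediction: learn which word tends to follow which from a
# tiny made-up corpus, then generate text by sampling likely next words.
# This is a simplified illustration, not how any real chatbot is built.
import random
from collections import defaultdict, Counter

corpus = (
    "take a deep breath before the meeting . "
    "take a short walk before the meeting . "
    "take a moment to breathe before you speak ."
).split()

# "Training": count which word follows which in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start_word, max_words=8):
    """Repeatedly sample a likely next word, weighted by how often it
    followed the current word in the training text."""
    words = [start_word]
    for _ in range(max_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("take"))  # e.g. "take a deep breath before the meeting ."
```

Run it a few times and the output changes slightly because of the randomness in sampling, which is also why a chatbot rarely gives the exact same answer twice.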

But these models are far from thinking like humans. They definitely are not trained mental health professionals who work under professional guidelines, follow a code of ethics, or hold professional registration, the report says.

Where does it learn to talk about this stuff?

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond:

background knowledge it memorised during training, external information sources, and information you previously provided.

1. Background knowledge

To develop an AI language model, developers teach the model by having it read vast quantities of data in a process called “training”. This data comes from publicly scraped sources, including everything from academic papers, eBooks, reports and free news articles to blogs, YouTube transcripts and comments on discussion forums such as Reddit.

Since the information is captured at a single point in time when the AI is built, it may also be out of date.

Many details also have to be discarded to squeeze everything into the AI’s “memory”. This is partly why AI models are prone to hallucinating and getting details wrong, as reported by PTI.

2. External information sources

AI developers might also connect the chatbot to external tools or knowledge sources, such as Google for searches or a curated database.

Meanwhile, some dedicated mental health chatbots access therapy guides and materials to help direct conversations along helpful lines.
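
As a rough illustration of how such external sources can be wired in, the Python sketch below pulls the best-matching snippets from a small, made-up knowledge base and folds them into the prompt before the model answers. The snippet list, the keyword-overlap scoring and the function names are assumptions for illustration only; real systems rely on search engines, curated databases or therapy materials, as the report describes.

```python
# Illustrative "retrieve then answer" pattern: look up relevant snippets
# from a curated knowledge base and include them in the prompt.
CURATED_SNIPPETS = [
    "Box breathing: inhale for 4 seconds, hold 4, exhale 4, hold 4.",
    "Grounding: name five things you can see and four you can hear.",
    "Sleep hygiene: keep a consistent bedtime and limit late-night screens.",
]

def retrieve(query, top_k=2):
    """Rank snippets by how many words they share with the query
    (a stand-in for real search or similarity matching)."""
    query_words = set(query.lower().split())
    scored = sorted(
        CURATED_SNIPPETS,
        key=lambda s: len(query_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(user_question):
    """Combine retrieved snippets with the user's question so the
    language model can draw on curated, up-to-date material."""
    context = "\n".join(retrieve(user_question))
    return f"Context:\n{context}\n\nQuestion: {user_question}\nAnswer:"

print(build_prompt("How can I stay calm during a stressful work meeting?"))
```

The point of the pattern is simply that the chatbot’s reply is no longer limited to what it memorised during training.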

3. Information previously provided by the user

AI platforms also have access to information you have previously supplied in conversations or when signing up for the platform.

On many chatbot platforms, anything you’ve ever said to an AI companion might be stored away for future reference. All of these details can be accessed by the AI and referenced when it responds.
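
The sketch below shows, in simplified form, how that kind of memory can work: the platform stores every message and passes the history back each time it generates a reply. The ChatSession class and its behaviour are hypothetical stand-ins for illustration, not any platform’s actual code.

```python
# Illustrative conversation memory: keep every message and reuse it as
# context for later replies.
class ChatSession:
    def __init__(self):
        self.history = []  # every user and bot message, kept for reuse

    def send(self, user_message):
        earlier = len(self.history)  # messages remembered from before
        self.history.append({"role": "user", "content": user_message})
        # A real platform would pass self.history to a language model here;
        # this sketch just reports how much stored context would be used.
        reply = f"(reply generated with {earlier} earlier messages as context)"
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
session.send("I get anxious before work meetings.")
print(session.send("Any tips for tomorrow?"))
# The second reply is produced with the first exchange still in context,
# which is why a bot can keep steering back to topics you raised earlier.
```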

These AI chatbots are overly friendly and validate all your thoughts, desires and dreams. They also tend to steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw on training and experience to challenge or redirect your thinking where needed, reported PTI.

Specific AI bots for mental health

Most people are familiar with big models such as OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot. These are general-purpose models: they are not limited to specific topics, nor are they trained to answer any particular type of question.

Developers have also built specialised AIs trained to discuss specific topics such as mental health, including Woebot and Wysa.

According to PTI, some studies show that these mental health-specific chatbots may be able to reduce users’ anxiety and depression symptoms. There is also some evidence that AI therapy and professional therapy deliver equivalent mental health outcomes in the short term.

Another important point to note is that these studies exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are reportedly funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. The companion chat platform Character.ai, for example, has been implicated in an ongoing legal case over a user’s suicide, according to the PTI report.

The bottom line

At this stage, it’s hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option. They may be a useful place to start when you’re having a bad day and just need a chat, but when the bad days keep coming, it’s time to talk to a professional.

More research is needed to identify whether certain types of users are more at risk of the harms AI chatbots might bring. It is also unclear whether we need to worry about emotional dependence, unhealthy attachment, worsening loneliness or intensive use.
