The risk of relying on AI for emotional support
An AI chatbot may seem like an ally, but it shuts us off from family and friends and undermines the science of therapy
A 47-year-old male client in therapy says he got into an argument with his partner over something trivial. “Generally, to cool down, I’d go for a long shower or watch something funny. This time, I started chatting with a generic AI bot, and our conversation lasted three hours. I typed in the details of the fight and its tonality, and it kept offering various perspectives.”
He says the bot made sense, but 30 minutes in, it felt like he was addicted to a conversation that just affirmed he was right, and he felt agitated. He kept replaying the virtual chat over and over, and was convinced he needed to leave his marriage. “The next morning I realised that my partner and I needed to sort things out, but I was surprised by how I had come under the spell of a chatbot that I had mistaken for an ally.”
Over the last year, many clients have been talking to AI bots for guidance and advice on relationship dynamics and mental health concerns. At schools, colleges and corporate organisations, participants tell me how they have started talking to AI bots when they feel lonely or conflicted.
As a therapist, I see this trend as a slippery slope. It has taken our country years to start accepting the legitimacy of mental health concerns and the need for licensed professionals. My concern is that this trend will come at the cost of the entire profession being misunderstood.
People will not reach out to trained professionals but will instead start trusting AI for help with their mental health, which will take away from the science of therapy.
From an ethics standpoint, we need to address important questions about confidentiality and vulnerability when we speak to AI. There is no confidentiality, and in the long run this will come at a price.
Second, we know that children and young adults are at an age where they are far more impressionable and gullible, and chatting with a bot could have implications for them. Adults who feel isolated, or who struggle with specific personality traits or pre-existing mental health disorders, are also at risk of being unable to filter the feedback thrown at them.
Bots usually reinforce the user’s thoughts and ideas, offering what may seem like a safe space. For a vulnerable person, this can lead to extreme rage, anger, acting out, an inability to self-soothe, and then a dependency where they prefer chats with an AI bot to connection with a therapist or loved ones.
A therapist chooses to pay attention to many cues: the pace at which the client is speaking, the choice of words, what is left unsaid, and more. At the same time, as an expert, the therapist chooses to tell the client if they are engaging in a thought pattern that feels irrational, obsessive or panicky.
Most importantly, at the heart of therapy is the belief that clients will find their own answers and are capable of self-soothing and reaching out to their loved ones for social soothing.
AI bots seem to offer not just “answers” but also to take on the role of a soothing agent. This will lead to people becoming addicted to AI and shutting out friends and family, who may not communicate in ways as agreeable as AI does.
When we choose AI chatbots for guidance, we forget how important connection is for our well-being. It’s the secret sauce that allows us to be seen. AI may meet our need for immediate soothing, but we need to ask ourselves if we are just creating echo chambers.
In the years to come, I hope we move towards ethical innovation with AI, where there is scope for responsibility, clinical research, guard rails that protect users, and clear protocols around the issues bots can help with and the ones they cannot.
Sonali Gupta is a Mumbai-based psychotherapist. She is the author of You Will be Alright: A Guide to Navigating Grief and has a YouTube channel, Mental Health with Sonali.
