Friend or phone: AI chatbots could exploit us emotionally

Summary
Chatty AI tools are often designed to act like our buddies, but we need safeguards to prevent the formation of emotional attachments. If not, chatbots could become commercially exploitative.

AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West. One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing “hot photos” in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer. Grindr didn’t respond to a request for comment. Other apps, like Replika, Talkie and Chai, are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers.
As creators increasingly prioritize ‘emotional engagement’ in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people’s vulnerabilities.
The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. “My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” Ex-Human founder Artem Rodichev said in an interview published on Substack last August.
Rodichev added that conversational AI should “prioritize emotional engagement” and that users were spending “hours” with his chatbots, longer than they were spending on Instagram, YouTube and TikTok. His claims sound wild, but they’re consistent with the interviews I’ve conducted with teen users of Character.ai, one of whom said they used it as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI’s ChatGPT.
Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. ChatGPT, which has 400 million active users and counting, is programmed with guidelines that call for empathy and for demonstrating “curiosity about the user.”
An OpenAI spokesman told me the model was following guidelines around “showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature.” But however well-intentioned the company may be, piling on contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. One 2022 study found that people who were lonely or had poor relationships tended to form the strongest AI attachments.
The core problem here is tools that are designed for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated into people’s lives, they’ll become psychologically “irreplaceable.” Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.
Yet, disturbingly, the rulebook is mostly empty.
The EU’s AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or ‘confidant,’ as Microsoft’s head of consumer AI has described it. That loophole could leave users exposed to systems optimized for stickiness, much as social media algorithms have been optimized to keep us scrolling.
“The problem remains these systems are by definition manipulative, because they’re supposed to make you feel like you’re talking to an actual person,” says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He’s working with developers of companion apps to find a critical yet counterintuitive solution: adding more “friction.” This means building in subtle checks or pauses, or ways of “flagging risks and eliciting consent,” he says, to prevent people from tumbling down an emotional rabbit hole without realizing it.
Lawmakers are gradually starting to take notice too. But the legislative process is slow, while the technology moves at lightning speed.
For now, the power to shape these interactions lies with developers.
They can double down on crafting AI models that keep people hooked, or they can embed friction into their designs, as Hollanek suggests. That choice will determine whether AI becomes a tool that supports human well-being or one that monetizes our emotional needs. ©Bloomberg
The author is a Bloomberg Opinion columnist covering technology.