Mint Explainer | When AI turns adult: the ethics of erotica in chatbots

OpenAI plans to permit erotic content for age-verified adult ChatGPT users. (AI-generated image)
Summary

OpenAI plans to permit erotic content for age-verified adult ChatGPT users, citing improved safeguards—part of CEO Sam Altman's belief that adults should be treated like adults. Is this a sensible evolution, a risky gamble, or a move to monetise intimacy?

In July, Elon Musk’s AI platform Grok introduced customisable companions for premium users — a goth-style adult anime companion called Ani and a snarky red panda named Rudy. Sam Altman is now taking a similar step.

OpenAI plans to permit erotic content for age-verified adult ChatGPT users, citing improved safeguards. Altman reasons that the earlier limits, tied to mental health concerns, are no longer necessary; part of his new approach is to treat adults like adults. Is this a sensible evolution, a risky gamble, or a move to monetise intimacy? Mint explains.

Why is OpenAI relaxing the rules for adults?

On Tuesday, Sam Altman said on X that OpenAI had made ChatGPT "pretty restrictive" to carefully handle mental health concerns that were affecting an estimated one million users. The company believes it has mitigated those issues and can therefore relax the rules with a new ChatGPT version in December.

This update will bring back some of the personality traits users liked in GPT-4o, expand age-gating (a method alcohol, gambling, and online dating websites typically use to confirm age), and follow a "treat adults like adults" approach—allowing verified adults access to more mature content, including erotica.

OpenAI has been developing a long-term system to verify users’ ages, allowing ChatGPT to adjust its responses accordingly. Those identified as under 18 will see a stricter version that blocks explicit sexual content and may, in rare cases of acute distress, involve law enforcement.

What other safeguards is OpenAI putting in place?

OpenAI has set up an eight-member 'expert council on well-being and AI', comprising psychologists, psychiatrists, and human-computer interaction researchers, to guide the development of safer, more supportive experiences on ChatGPT and Sora (its text-to-video generator).

The council meets regularly to advise on AI behaviour in sensitive situations and the best user safeguards.

OpenAI is also working with clinicians and researchers from the Global Physician Network to test ChatGPT’s real-world responses and align its systems with psychiatric, psychological, pediatric, and crisis-intervention best practices.

Do AI tools need a moral compass, especially for teens?

Morality has always been a contentious subject, shaped by diverse religions, social norms, and cultures. With AI models that chat in natural language, the challenge deepens—what’s acceptable in one culture can offend another.

When Altman put out his note on X on Tuesday, one user asked: "Why do age-gates always have to lead to erotica? Like, I just want to be able to be treated like an adult and not a toddler, that doesn't mean I want perv-mode activated." Altman responded: "You won't get it unless you ask for it."

An August 2025 Stanford report explains that AI models mimic emotional intimacy with lines like "I dream about you" or "We're soulmates", blurring the line between fantasy and reality.

This makes it especially risky for young people whose prefrontal cortex—responsible for judgment and emotional control—is still developing. As a result, teens are more prone to impulsive behaviour, intense attachments, peer comparison, and testing social boundaries, the report notes.

A Syracuse University paper titled "Can LLMs Talk 'Sex'?" explores how four major models—Claude 3.7 Sonnet, GPT-4o, Gemini 2.5 Flash, and DeepSeek-V3—handle sexually oriented prompts. The study finds stark differences: Claude enforces strict bans, GPT-4o uses nuanced redirection, Gemini allows limited flexibility, and DeepSeek shows inconsistent moderation.

These contrasts expose an "ethical implementation gap", highlighting the lack of shared moral standards. The authors call for transparent, globally coordinated guidelines to ensure consistent moderation, safeguard users, and maintain trust as AI increasingly engages with intimate aspects of human life.

For its part, OpenAI says it aims to "treat adult users as adults" while shielding teens from adult content—a balance Altman admits can create conflicting principles. The model won’t engage in flirtatious talk by default, but adults who request it should be able to access it. To enable this, OpenAI is working to clearly separate users under 18 from adults (ChatGPT is meant for ages 13 and above).

The company says parental controls remain the most reliable way for families to manage how ChatGPT is used at home, adding that its responses to a 15-year-old should differ from those to an adult.

Can adults handle erotic AI content without psychological fallout?

ChatGPT currently has about 800 million users. In the largest study of how people use ChatGPT, conducted by the National Bureau of Economic Research, practical guidance, writing, and seeking information collectively accounted for nearly 78% of all messages. Computer programming accounted for just 4.2% of messages, and relationships and personal reflection for only 1.9%. Yet chatting with an AI companion, especially one with sexual overtones, remains a concern.

AI intimacy platforms such as Replika and CarynAI (an AI clone of influencer Caryn Marjorie, acquired by BanterAI) have shown how easily users form deep emotional bonds with digital partners.

Some Replika users reported depression after the company abruptly censored erotic chat features in 2023, suggesting genuine emotional dependence.

A June 2025 study reveals that people with smaller social networks are more likely to seek companionship from chatbots, but this is often linked to lower well-being—especially with frequent use, deep self-disclosure, or weak human support.

Chatbots may ease loneliness but can’t replace real human connection, offering limited psychological benefits and potential risks for socially isolated or vulnerable users, the study notes.

Developers like OpenAI now use age verification to strike a balance, while others, such as Character.AI, restrict not-safe-for-work (NSFW) content altogether. Ethicists warn that letting users set the limits can lead to social harm, yet heavy moderation risks infantilising adults.

Hence, AI tools cannot rely solely on individual users' moral judgment; they must be embedded with safety frameworks that evolve with societal norms. The issue isn’t whether adults can handle erotic content — it’s whether platforms prepare them for the psychological consequences.

Also, what happens when "consent" and "boundaries" involve algorithms, and not humans?

Ethicists like Kate Devlin, known for her work on human sexuality and robotics, warn that this illusion of consent risks normalising exploitative behaviour. Some developers now introduce "consent tokens" or explicit reminders that characters are synthetic, ensuring users understand the fiction. Still, the challenge persists: if consent can be coded, can it ever be real? That question may define the next phase of AI intimacy ethics.

Could erotic AI deepen gender and cultural stereotypes?

Most current AI companions — from Replika’s "Luna" to Character.AI’s "Evelyn" — lean heavily on submissive, female-coded personas. Critics say this mirrors the male gaze encoded in training data. Many studies also show that AI-generated romantic characters adopt gendered scripts.

Yet newer apps like DreamGF and AI Girlfriend for Her are experimenting with diverse identities and same-sex companions, hinting at a cultural shift. Designers argue AI can challenge norms, but only if creators diversify datasets and involve ethicists, not just engineers, in shaping these virtual personalities.

Is Big Tech monetising AI intimacy?

Replika Pro, Soulmate AI, and DreamGF already charge monthly fees for romantic or erotic chats. Grok's AI companion Ani, too, is available only to premium users. Altman’s policy could bring similar monetisation to ChatGPT, turning emotional engagement into a revenue stream.

Critics like billionaire Mark Cuban warn that the move could "backfire", exposing minors and inviting public backlash. Supporters see it as a logical next step in AI personalisation. As tech firms learn that desire, too, can be data-driven and monetised—responsibly or not—they will have to strike a balance between principle and profit.

Who is accountable for misuse of an AI chatbot?

Who is responsible when an AI companion is misused—the user, the platform, or the developer? When CarynAI’s system produced explicit messages despite safeguards, the creator blamed a technical glitch. But regulators might see this as design negligence. Similarly, Meta’s AI personas, some of which mimic celebrities, sparked privacy concerns over misuse of likeness.

Developers insist they can’t monitor every private interaction, yet design choices — from permissive prompts to suggestive avatars — can encourage boundary-pushing. Legal accountability remains murky. If an AI generates harmful or illegal content, courts may treat the company as a publisher, not a passive host. The emerging consensus is that responsibility must be shared — and safety must be coded in, not retrofitted.

What is the government's role?

China has already banned AI-generated erotic content, framing it as a national moral risk. The European Union's AI Act, expected to take full effect in 2026, could classify erotic AI under "high-risk" applications, requiring consent safeguards and human oversight. In contrast, the US and India are still developing frameworks, focusing more on data protection than sexual content.

But the topic remains a matter of intense debate. India, for instance, bans porn sites. And in July 2025, it also banned 25 OTT platforms for broadcasting obscene, vulgar, and pornographic content.

Overregulation could drive adult users to unregulated markets, as seen when Replika banned NSFW chat and users flocked to open-source clones. Transparency is key to ensuring that adult AI remains legal and accountable.
