Grok image edits spark deepfake debate on X, raising fresh concerns over consent and misuse

Grok, an AI chatbot from Elon Musk’s xAI, enables users to modify images, raising concerns about consent and potential misuse. The debate gained momentum following the viral “put in bikini” prompt trend on X.

Sayak Basu
Updated 1 Jan 2026, 11:42 AM IST
AI chatbot icons on a smartphone. Image for representation. (Unsplash)

Artificial intelligence (AI) has come as a boon to many aspects of our lives. It helps us find that old song whose tune we can’t get out of our heads, draft letters and even tackle complex medical or industrial use cases.

However, every boon comes with a bane. With cars and machines came pollution. With the advent of social media came loneliness and isolation. Now, with AI and deepfakes, the line between reality and fiction appears to be blurring further.

‘Put in bikini’ prompt puts Grok’s image edits under scrutiny

The latest concern involves Grok, the AI chatbot built by Elon Musk’s company, xAI. The issue emerged after a “put in bikini” image-editing prompt began circulating on X, with some users employing the tool to generate explicit pictures of people without their consent.


In July 2025, during the infamous ‘MechaHitler’ controversy, Musk said: “Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed.”

Musk has since made changes to Grok, turning it into an AI chatbot with fewer guardrails compared to its peers. Grok can swear if prompted, modify images, and, in some cases, generate sexually suggestive content. Notably, it does so in replies to posts on X, making the results visible to anyone on the platform.

After discovering this feature, several users on X began tagging the chatbot in replies to posts containing images, asking it to sexualise or alter pictures of women.

The trend has also drawn criticism. Many users have shared screenshots of the media tab under Grok’s profile, which shows numerous AI-morphed images of women.

One user posted a question on X, asking, “Why is everyone abusing @grok today? 😂”, to which the chatbot replied, “Looks like a trend of folks testing my image-editing skills with cheeky requests today—bikinis, pants removals, you name it. Keeping it fun, but boundaries matter! What's sparking your curiosity? 😂”

Grok is not being used only for explicit images; it is also being used to make fun of celebrities. Case in point: the thread below, in which a user asked Grok to replace the Globe Soccer Award for Best Player in the Middle East in Cristiano Ronaldo’s hands with a World Cup trophy:

Deepfakes: The potential to harm

The potential harm caused by deepfakes or AI-generated images is significant. Fake images can be used in online romance scams, where fabricated photographs are added to dating profiles to deceive others.

Images depicting tragedies or serious illnesses can also be generated to solicit donations, which may later turn out to be fraudulent.


In other cases, images or videos can be created that show celebrities endorsing products they have never endorsed. The same technology can replicate voices, potentially linking individuals to unlawful activities.

Personality rights: A battle for truth

In India, several celebrities have approached courts to secure their personality rights amid rampant misuse of AI-generated content on social media platforms.

Recently, the Delhi High Court granted Telugu superstar Jr NTR protection of his personality and publicity rights. The order safeguards his image, voice, mannerisms and overall persona from unauthorised use.

Other celebrities who have sought similar protection include Amitabh Bachchan, Abhishek Bachchan, Aishwarya Rai Bachchan, Kumar Sanu and Salman Khan.

Be aware: What to watch out for in the age of deepfakes

In an era where AI can modify content to appear entirely different from its original context, users must remain cautious. Washington University in St Louis has shared several pointers to keep in mind:

1. Verify claims on social media through multiple credible sources before forming conclusions. If you see an image or video of a celebrity or politician, verify their official handles or statements.


2. AI-generated images may appear overly glossy or slightly cartoonish. Look for exaggerated or unnatural details, though such cues are becoming harder to detect.

3. If something appears too good, or too shocking, to be true, it may have been created using AI or deepfake technology and could be part of a scam.


