OpenAI says it can now detect images spawned by its software—most of the time

Sam Altman, CEO of OpenAI, at a summit in San Francisco in November. PHOTO: CARLOS BARRIA/REUTERS

Summary

The startup’s new tool detects 98% of pictures generated by its text-to-image generator DALL-E 3, but success drops if the images are altered.

AI is getting better at recognizing its own work.

OpenAI on Tuesday is launching a new tool that can detect whether an image was created using the company’s text-to-image generator, DALL-E 3. OpenAI officials said that the tool is highly accurate in detecting DALL-E 3 images, but that small changes to a picture can confuse it, reflecting how artificial-intelligence companies are playing catch-up in tracking their own technology.

A surge of fake images and other media created using generative AI has created confusion about what is and isn’t real, and fueled discussion about the way images are affecting election campaigns in 2024.

Policymakers are concerned that voters are increasingly encountering AI-created images online, and the wide availability of tools like DALL-E 3 makes it possible to create such content even faster. Other AI startups and tech companies are also building tools to help identify AI-generated content.

“Election concern is absolutely driving a bunch of this work,” said David Robinson, who oversees policy planning for OpenAI. “It’s the number one context of concern that we hear about from policymakers.”

OpenAI on Tuesday also said it would join an industry group co-founded by Microsoft and Adobe trying to create content credentials for online images. And OpenAI and Microsoft are launching a $2 million “societal resilience” fund to support AI education.

OpenAI said its new tool is about 98% accurate in detecting content created by DALL-E 3 under most circumstances, provided the image isn’t altered. When those images are screenshotted or cropped, the classifier is slightly less successful but can still often make an accurate identification.

The tool’s performance declines further under certain conditions, such as when an image’s hue is changed, Sandhini Agarwal, an OpenAI researcher focused on policy, said in an interview. OpenAI hopes to find fixes for those problems by opening the tool up to outside researchers, Agarwal added.

OpenAI has been testing its classification tool internally for months. It doesn’t rely on watermarks, the signatures many companies intentionally embed in AI images but that can often be removed.

While OpenAI’s tool can determine whether images were made with DALL-E 3, the company has found that it can be confused when asked to evaluate AI images created by rival products, Agarwal said.

The tool incorrectly flags non-AI-generated images as produced by DALL-E 3 about 0.5% of the time, the company said.
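Read together, those two figures can be illustrated with a rough back-of-the-envelope calculation. The sketch below is hypothetical and is not OpenAI’s tool or code; the 98% detection rate and 0.5% false-positive rate come from the company’s figures above, while the assumed 1% share of DALL-E 3 images in the pool is an invented number used only for illustration.

# Hypothetical illustration of what the reported rates imply.
# Detection rate (98%) and false-positive rate (0.5%) are from the article;
# the share of DALL-E 3 images in the pool is an assumed figure.
def flagged_breakdown(total_images: int, dalle_share: float,
                      detection_rate: float = 0.98,
                      false_positive_rate: float = 0.005):
    """Return (correctly flagged, incorrectly flagged) image counts."""
    dalle_images = total_images * dalle_share
    other_images = total_images - dalle_images
    true_positives = dalle_images * detection_rate
    false_positives = other_images * false_positive_rate
    return true_positives, false_positives

# Example: a pool of 100,000 images in which 1% were made with DALL-E 3.
tp, fp = flagged_breakdown(100_000, dalle_share=0.01)
print(f"Correctly flagged: {tp:.0f}, incorrectly flagged: {fp:.0f}")
# Correctly flagged: 980, incorrectly flagged: 495

Under that assumption, roughly a third of flagged images would be false alarms, which is why the false-positive rate matters as much as the headline detection rate when AI-generated images are a small share of what the tool screens.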

Determining whether an image is AI-generated can be easier than screening for text created by AI, researchers from OpenAI and elsewhere have said. In January of last year, two months after the launch of ChatGPT sparked huge excitement, OpenAI released a tool designed to detect AI-written work; by the company’s own account, it failed to identify bot-written text nearly three-quarters of the time.

OpenAI officials say they are still working on improving that tool.

Write to Deepa Seetharaman at deepa.seetharaman@wsj.com
