As artificial intelligence (AI) continues to work its way into everyday life, its misuse is a growing cause for concern. AI has made it easier and faster, for instance, to manipulate images and videos into deepfakes that spread fake news and propaganda, blurring the line between reality and illusion.
Now, researchers have developed a technique that makes minuscule alterations to pixel values to protect images from such manipulation.
Researchers at the Massachusetts Institute of Technology (MIT) have developed PhotoGuard, a technique that uses perturbations, tiny alterations in pixel values that are invisible to the human eye but detectable by computer models, to disrupt an AI model's ability to manipulate an image, according to a news release on the MIT website.
AI models perceive an image as a set of mathematical data points describing every pixel's colour and position. The new technique uses two methods to trick these models. The first introduces minor adjustments to this mathematical representation, so that the AI model perceives the image as random noise and meaningful manipulation becomes nearly impossible, the news release explains. Because the changes are invisible to the human eye, the image's visual appearance is unaffected.
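In spirit, this first method resembles a standard projected gradient descent (PGD) adversarial attack on the model's image encoder. The sketch below illustrates the idea in PyTorch; the function names (`immunize`, `encode`), the blank-image target, and the hyperparameters are illustrative assumptions, not PhotoGuard's actual code.

```python
import torch
import torch.nn.functional as F

def immunize(image, encode, eps=8 / 255, step=1 / 255, iters=100):
    """Sketch of an encoder attack (illustrative, not PhotoGuard's API):
    find a perturbation with L-inf norm <= eps that drives the encoder's
    representation of the image toward that of a blank, meaningless image."""
    with torch.no_grad():
        target = encode(torch.zeros_like(image))  # latent of a blank frame

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        # How close is the perturbed image's latent to the blank target?
        loss = F.mse_loss(encode(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                 # descend on the loss
            delta.clamp_(-eps, eps)                           # keep the change invisible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Because the perturbation is capped at `eps` per channel (8/255 here, a common adversarial-attack budget), it stays below the threshold of human perception while still pulling the model's internal representation far from the original image.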
The second method is more complex. The perturbation makes the AI model perceive a different, decoy image resembling a chosen target, so any edits the model makes are effectively applied to that decoy rather than to the original. With either method, the final output lacks realism compared to an unprotected edit, making the tampering easier to detect.
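Hedged similarly, the second method can be sketched as the same optimization run through the entire editing pipeline rather than just the encoder: the loss now pulls the model's edited output toward a decoy target image. The differentiable `edit(image, prompt)` function below is an assumption standing in for a full diffusion-based editing pipeline, and all names are again illustrative.

```python
import torch
import torch.nn.functional as F

def immunize_end_to_end(image, edit, prompt, target_image,
                        eps=8 / 255, step=1 / 255, iters=50):
    """Sketch of an end-to-end attack (illustrative): optimize the
    perturbation so the model's edit of the protected image comes out
    resembling a chosen decoy `target_image`, not a realistic manipulation."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        # Backpropagate through the whole (differentiable) editing pipeline.
        loss = F.mse_loss(edit(image + delta, prompt), target_image)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

The structure mirrors the first sketch; the key difference is that the gradient must flow through the full editing process, which is what makes this variant more computationally demanding.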
“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defence against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” lead author Hadi Salman said in the MIT news release.
Salman added that policymakers should consider regulations requiring companies to protect user data from such manipulation. “Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” he said in the release.
As more tech giants invest in AI, concern about its harmful consequences, particularly image manipulation, is growing as well. Some companies have responded with updates meant to ensure the authenticity of images and videos. In May 2023, for instance, Google rolled out “About this Image”, an addition to its search platform that surfaces context about an image’s origin and history to help users distinguish AI-generated images from real ones.