AI’s next goal should be telling the authentic from the fake

Actor Rashmika Mandanna was the victim of a recent deepfake video that went viral (Photo: Rashmika Mandanna/X)

Summary

  • Deepfakes can cause much harm but there’s nothing inherently evil about the technology. It all comes down to how it’s used.

Rashmika Mandanna was recently the subject, or victim, of a video that went viral. The actor’s head was morphed onto the body of another woman, Zara Patel, in a manner that made the video look entirely authentic.

This sort of image manipulation is quite easy, and it is even possible to add dialogue in any desired voice to create what’s known as a deepfake. There are websites that let users upload images or videos to swap heads or voices. Swapping other body parts in convincing fashion is slightly more complicated but also possible.

Minister of State Rajeev Chandrasekhar reacted to the viral Mandanna deepfake by pointing out that platforms have a legal obligation under the IT Rules 2023 to remove fake or misleading content, or to label it. However, that presupposes the platform realises the content is fake or misleading, which is often not easy.

The “deep” in “deepfake” comes from the phrase “deep learning”, a method of training AI models. The easiest form is swapping somebody’s head in a still image. Swapping heads – and voices – in videos is more complicated but, as we have seen, also possible. Entire bodies can be swapped, too.

Several open-source models, shared by communities on websites such as GitHub and Reddit, can be used to create deepfakes. There’s nothing inherently good or evil about the technology – it all comes down to the way it’s used.
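To make the basic mechanics concrete, here is a minimal sketch in Python using OpenCV. It is a naive cut-and-blend swap, not any specific tool mentioned above; the file names are hypothetical, and real deepfake models work very differently, as noted below.

```python
# Naive face swap: detect a face in each image, paste the source
# face over the target face, and blend the seam. Illustrative only.
import cv2
import numpy as np

source = cv2.imread("source_face.jpg")   # hypothetical input files
target = cv2.imread("target_scene.jpg")

# Detect the first face in each image with OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sx, sy, sw, sh = cascade.detectMultiScale(
    cv2.cvtColor(source, cv2.COLOR_BGR2GRAY), 1.3, 5)[0]
tx, ty, tw, th = cascade.detectMultiScale(
    cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), 1.3, 5)[0]

# Resize the source face to the target face's dimensions.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Poisson (seamless) cloning hides the hard edges of the paste.
mask = 255 * np.ones(face.shape, face.dtype)
centre = (tx + tw // 2, ty + th // 2)
result = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", result)
```

Actual deepfake pipelines replace the resize-and-blend step with a learned generative model – typically an autoencoder or GAN trained on many images of both faces – which is what makes the results so much more convincing.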

Consider Siri, Alexa, navigation apps, or computer-generated dubbing of YouTube videos. These voices are built from samples provided by real individuals. Such a sample could be taken in any language, then taken apart and put back together to speak the desired words in another language. Extend this concept to images and videos and you have the essence of the deepfake. Some websites claim they need just five seconds of audio to convincingly clone a voice. When it comes to actors and other public figures, there are plenty of images and videos in the public domain, making it possible to have them “say” anything and “do” pretty much anything.
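As a sketch of how few-shot voice cloning works in practice, here is one open-source route using Coqui TTS’s XTTS model – one tool among many, named purely as an example. The audio file names are hypothetical, and such cloning should only ever be done with the speaker’s consent.

```python
# Few-shot voice cloning with Coqui TTS (pip install TTS).
# The model synthesises new speech in the voice of a short
# reference recording -- use only voices you have consent to clone.
from TTS.api import TTS

# Load the multilingual XTTS v2 model (downloads on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This sentence was never actually spoken by me.",
    speaker_wav="voice_sample.wav",  # a few seconds of reference audio
    language="en",
    file_path="cloned_output.wav",
)
```

Because the model learns the timbre of the voice rather than splicing recordings, the reference sample and the output text can be in different languages – which is exactly how computer-generated dubbing works.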

AI has also advanced to the point where you can take footage of a gymnast or a tennis player, for instance, and swap his or her body with someone else’s. Entire websites are dedicated to creating ultra-realistic faces and bodies of people who don’t exist, and entire movies have been made using deepfakes. Elvis Presley and John Lennon have been deepfaked by nostalgic fans. The Star Wars franchise has seen deepfakes featuring a younger version of Harrison Ford and new scenes involving the late Carrie Fisher.

There are many legitimate reasons to use deepfakes. A film studio or advertising agency may want to create a movie involving non-existent actors. A movie could be made more realistic with deepfaked stunts. Actors can be aged or made to look younger without the use of makeup. The technology can also be used for satire or humour, though such use should be clearly labelled.

The issue is that deepfakes can also be used to do a lot of harm. The Mandanna-Patel video was relatively harmless – there’s much, much worse deepfake content involving Indian celebrities on adult websites. A variation of “revenge porn” uses deepfakes to swap heads in pornographic content to destroy reputations and get people into trouble.

Misleading political, religious and communal content can also be generated. Florida Governor Ron DeSantis has released ads with a deepfaked Donald Trump, for example. Deepfakes can also be used for phishing.

One huge issue, therefore, is maliciously fake content. The other is that, as deepfakes become more prevalent and easier to make, it’s becoming harder to tell real content from fake. Courts may soon have to set new standards for video evidence from CCTV cameras, for example, and politicians who actually do say outrageous things can deny having said them and blame deepfakes instead.

The next thrust in AI may well be to develop technologies that help us distinguish the authentic from the fake. Until then, humans will have to develop a healthy distrust of what they see online, starting with the assumption that if something seems outlandish, it’s likely to be fake. For now, that’s the only antidote we have.
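What might such detection technology look like? One common building block today is a frame-level image classifier trained on labelled real and fake footage. The Python sketch below shows the general shape of such a detector; the checkpoint file name is a hypothetical placeholder, and production systems add far richer signals, such as temporal and audio cues.

```python
# Sketch of a frame-level deepfake detector: a standard image
# classifier with two outputs ("real" vs "fake"). Illustrative only.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # real vs fake
# Hypothetical checkpoint -- in practice this would be trained on a
# labelled dataset of real and deepfaked video frames.
model.load_state_dict(torch.load("detector.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

frame = preprocess(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)[0]
print(f"P(fake) = {probs[1]:.2f}")
```

The catch, of course, is that detectors and generators are locked in an arms race: each improvement in detection gives the next generation of deepfake models something new to train against.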
