Photo: iStock

AI-powered deepfakes are a bigger threat than fake news

  • Deepfake videos may incite more violence than fake news because video is more convincing than text
  • Facebook has asked its AI researchers to create a large bank of realistic deepfake videos so they can be used as a benchmark to build and reinforce their own detection tools

This February, a clip from the 1983 Hong Kong television drama The Legend Of The Condor Heroes resurfaced with the face of China’s best-known contemporary actress, Yang Mi, swapped onto the original lead. Given the prevailing China-Hong Kong friction, the fake video garnered almost 240 million views before it was removed by Chinese authorities. Similarly, a video of US President Donald Trump urging Belgium to withdraw from the Paris climate agreement, and a video of Facebook CEO Mark Zuckerberg boasting that the social network owns its users, were widely circulated before being identified as fakes.

Known as deepfakes, this new breed of fake videos first surfaced back in 2017 with fake porn videos of some Hollywood celebrities. While simple open-source video editing tools were initially used to manipulate audio and video, criminals are now using more sophisticated machine learning (ML) tools like generative adversarial networks, or GANs, which pit a pair of neural networks against each other to create a deepfake. The discriminator network learns to detect inconsistencies, while the generator tries to fool it, reworking its output until the discriminator can no longer tell fake from real.
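The generator-versus-discriminator tug-of-war can be sketched with a toy, one-parameter example. This is only an illustration of the adversarial training loop, not deepfake code: the "real" data here is just numbers clustered around 4.0 (an arbitrary choice), the generator is a single scalar, and the discriminator is a one-variable logistic classifier.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: scalar samples clustered around 4.0 (stand-in for real images)
def real_sample():
    return 4.0 + random.gauss(0.0, 0.1)

theta = 0.0        # generator parameter: a fake sample is theta + noise
w, b = 0.0, 0.0    # discriminator: D(x) = sigmoid(w*x + b), probability x is real
lr = 0.05
history = []

for step in range(2000):
    x_real = real_sample()
    x_fake = theta + random.gauss(0.0, 0.1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)  # BCE gradient wrt w
    b -= lr * ((d_real - 1.0) + d_fake)                    # BCE gradient wrt b

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator
    x_fake = theta + random.gauss(0.0, 0.1)
    d_fake = sigmoid(w * x_fake + b)
    theta -= lr * (d_fake - 1.0) * w  # gradient of -log D(fake) wrt theta
    history.append(theta)

# The generator's output drifts toward the mean of the real data (~4.0)
estimate = sum(history[-500:]) / 500.0
print(f"generator mean after training: {estimate:.2f}")
```

Real deepfake systems replace these scalars with deep convolutional networks trained on thousands of face images, but the loop is the same: the discriminator learns to spot fakes, and the generator learns to beat it.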

A few months ago, for instance, an anonymous developer released a tool called DeepNude that generated a fake nude image from a single photo of a fully-clothed woman. Another deepfake app, Zao, which uses GANs, has become an overnight download sensation in China. It lets users take a photo and superimpose their face onto a character in a movie or TV series.

“A branch of AI, GAN uses multiple neural networks to create photographs or videos that appear real. The process only requires a basic consumer-grade graphics card to get the job done within hours. Since deepfake software is available on open-source platforms, people are constantly refining and building upon the work of others," cautions Venkat Krishnapur, vice-president of engineering and managing director at McAfee India.

“The sophistication with which such videos are being created these days is alarming. It does not take much time, effort or computing power to pull multiple frames out of a target’s video to collect a few hundred images for forging. AI/ML has made it possible to alter words and speech very easily," corroborates Jaspreet Singh, partner for information security at EY India.

Algorithm can be used for text-based editing of ‘talking-head’ videos.

A joint study by Stanford University, Princeton University and Adobe Research, published this June, demonstrated a technique for text-based editing of ‘talking-head’ videos, in which part of a person’s face is modified to make it appear they are saying something else.

The fake video was shown to 138 volunteers, with 60% assuming it was real. Similarly, a September 2019 study by New York University cautioned that deepfake videos could be used by domestic and foreign sources to influence the 2020 US presidential election.

To be sure, in countries with low literacy levels like India, where fake messages on WhatsApp led to multiple cases of mob lynching in 2018, deepfake videos could trigger even more violence. People tend to believe what they see and hear, which makes a fabricated video far more convincing than a fake text message.

Social media platforms, for their part, have begun to respond. WeChat, for instance, has started blocking video posts created via Zao.

Facebook has asked its AI researchers to create a large bank of realistic deepfake videos so they can be used as a benchmark to build and reinforce their own detection tools.

Facebook has also partnered with Microsoft and leading universities like MIT, University of Oxford and UC Berkeley to launch a contest called Deepfake Detection Challenge to encourage stakeholders to come up with new ways to identify and prevent dissemination of deepfakes.

However, detecting fake videos is not going to be easy. While leading social media platforms are using AI to build detection tools, technologies like blockchain can also come in handy. “Blockchain and smart contracts are being seen as an effective method to combat such videos, since these technologies can help trace digital content back to its origin," advises Singh.
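The provenance idea Singh describes can be sketched in a few lines: record a cryptographic fingerprint of the original footage and compare any circulating copy against it. The file name and in-memory registry below are hypothetical stand-ins; in practice the fingerprint would be written to a blockchain so the record itself cannot be silently rewritten.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw video bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry; in practice this mapping would live on-chain
registry = {}

def register(name: str, data: bytes) -> None:
    """Record the fingerprint of the original, trusted footage."""
    registry[name] = fingerprint(data)

def is_unaltered(name: str, data: bytes) -> bool:
    """Check a circulating copy against the registered fingerprint."""
    return registry.get(name) == fingerprint(data)

original = b"...raw bytes of the original video..."
register("press-briefing.mp4", original)

tampered = original + b"\x00"  # any edit, however small, changes the digest
print(is_unaltered("press-briefing.mp4", original))   # True
print(is_unaltered("press-briefing.mp4", tampered))   # False
```

This only proves whether a file matches a registered original; it cannot by itself flag a deepfake that was never registered, which is why it complements rather than replaces AI detection.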

Users, for their part, should also try to verify videos with some additional online research, and not assume that everything they see is true.