Opinion | The dangers posed by AI-driven ‘deepfakes’

The use of technology to manipulate people’s appearances and voices into realistic-looking footage highlights the need to question the rise of Artificial Intelligence and its misuse

Academics and researchers typically accuse the media of scaremongering and painting dystopian scenarios, especially when it comes to the coverage of Artificial Intelligence (AI)-powered algorithms. They do have a point, given that machine learning and deep learning algorithms are exactly what power most of the software and smart devices we use in our daily lives, be they smartphones, cameras, Internet of Things (IoT) devices or voice assistants. Smartphone penetration and advances in image recognition, for instance, are turning phones into powerful at-home diagnostic tools, while these cutting-edge algorithms are helping doctors, researchers and technology companies revolutionize healthcare. AI in conjunction with IoT (sensors and wearables), robotics, virtual reality (VR) and augmented reality (AR) is playing an important role in this transformation. Researchers use AI systems to help radiologists improve their ability to diagnose and track prostate cancer. Nvidia researchers have generated synthetic brain MRI images for AI research with the aim of helping doctors learn more about rare brain tumours. Google researchers used a deep learning neural network (a machine learning, or ML, technique, itself considered a subset of AI) trained on retinal images to identify cardiovascular risk factors. Researchers have even begun using ML to decode signals from sensors on the body and translate them into commands that move a prosthetic device.

Given this context, dystopian scenarios certainly seem counterproductive. That said, dismissing those who raise critical questions about the evolution of increasingly sophisticated AI algorithms smacks of overconfidence. There are genuine concerns. There’s the clear danger that “deepfakes”—AI-powered algorithms that manipulate people’s appearances and voices into realistic-looking footage—pose, besides algorithms that simply listen to voices and generate the faces, or even the likenesses of entire bodies, of non-existent people. Even those who promote such technology are not spared, the recent deepfake of Facebook founder Mark Zuckerberg being a case in point. Then, in general, there is always the danger of too much data passing into the wrong hands—be it cybercriminals who can steal our identities, or governments that can, and do, use AI on that data to monitor our social media habits, introduce policies to instil “moral” behaviour, and police people with the help of Face IDs, as is done in countries such as China. SenseTime and Megvii’s AI-powered facial recognition systems, for instance, can potentially allow the Chinese government to identify any of its citizens within seconds and also record an individual’s behaviour to predict who might become a threat, reminding one of the pre-crime scenes in Minority Report.

To be fair, Face IDs are also changing the way mobile payments are made in China, and it’s not that the media ignores such developments. It’s simply that policies and legislation have always lagged many steps behind the relentless pace of AI development, despite the best intentions of companies and governments. Hence, it’s all the more important that the media and other critics continue to voice their concerns without being seen as impediments to technological progress. Being a Luddite in today’s digital world is not only foolish but counterproductive too. However, the attitude of putting technologies such as AI on a pedestal, without adequate privacy and data protection laws and without a sound understanding of what an unsupervised algorithm can achieve, is equally dangerous.