Opinion | The deep fakes that publications must guard against
03 Aug 2020 | Non-existent people could end up offering op-ed opinions unless we devise tools to filter out such fakes
Last week, we saw the chief executives of four of America’s Big Tech firms give evasive answers under sustained grilling by US legislators, who have mountains of evidence pointing to violations of that country’s antitrust laws. The evidence covers both external representations and revelations from within each of the firms: Alphabet (Google’s parent), Amazon, Apple and Facebook.
The issues with Google relate to its search results and advertising exchanges, while those with Amazon centre on its decision to compete with the retailers that use its marketplace as their primary go-to-market channel. The problems with Apple relate to its App Store fees and the restrictions on competition on the iPhone. Facebook had to answer queries on its 2012 acquisition of Instagram.
Parts of the American press see this as the start of a large comeuppance for US Big Tech. The Washington Post, commenting on the eve of the hearings, said: “Congress brought the country’s big banks to heel after the financial crisis, cowed a tobacco industry for imperiling public health and forced airline leaders to atone for years of treating their passengers poorly. Now, lawmakers are set to turn their attention to technology, channeling long-simmering frustrations with Amazon, Apple, Facebook and Google into a high-profile hearing some Democrats and Republicans hope will usher in sweeping changes throughout Silicon Valley."
The Big Tobacco hearings took place while I lived in the US during the 1990s, and the fallout of those was indeed large, spinning off one of the largest and most financially damaging class-action lawsuits in history. Big Tobacco paid heavily.
The Washington Post and others may well be right about the long term. For now, though, the day after the hearings all four companies under scrutiny reported their earnings, and while Google saw its first-ever revenue decline, the other three posted surprisingly strong results that sent their stocks climbing rapidly. It would seem that investors and speculators on the stock market have not yet caught up with what the American press sees as a potential reality.
Speaking of the press, however, while Big Tech stocks were surging on 31 July, there was a landmark move down under. Australia is now leading the way in another area of Big Tech regulation: Google and Facebook could be forced to pay Australian news publishers for the content they distribute. If approved, a draft code announced by the Australian Competition and Consumer Commission would allow Australian outlets to secure payments within a few months. This seems fair, and hopefully the rest of the world will arrive at a similar view on paying news outlets for the reportage that they work very hard and spend heavily to create.
But there is a technological wrinkle that is beginning to affect news outlets. In the US newspaper The Algemeiner, a journalist named Oliver Taylor accused London-based academic Mazen Masri and his wife Ryvka Barnard of being “known terrorist sympathizers". Masri and Barnard are activists who gained prominence for a lawsuit they filed against an Israeli company named NSO Group Technologies, whose spyware Pegasus is known for its remote surveillance of smartphones. Taylor also had stories in the Times of Israel and the Jerusalem Post.
Understandably, Masri and Barnard were upset. But Masri sensed something awry and reported Taylor to Reuters, which ran an investigative piece (https://reut.rs/2Xnx21v) on him. Taylor claimed to be in his 20s and a student at the University of Birmingham, but the university could find no record of him, so Reuters had the only known photograph of him analysed; it turned out to be a deep fake. The news agency was unable to trace the journalist. Oliver Taylor simply doesn’t exist.
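For readers curious about what even a basic examination of such a photograph involves, here is a minimal, heavily simplified sketch in Python of two crude first-pass checks a newsroom could run on a suspect headshot: whether the file carries any camera metadata (synthetic portraits typically carry none), and an error-level analysis map that can hint at tampering. This is illustrative only; it is not the forensic process Reuters’ experts used, and the file names are hypothetical.

```python
# A rough first-pass forensic sketch, NOT the analysis Reuters' experts used.
# Requires Pillow: pip install pillow

from PIL import Image, ImageChops

def exif_summary(path: str) -> dict:
    """Return whatever EXIF tags the image carries (often none for synthetic faces)."""
    img = Image.open(path)
    exif = img.getexif()
    return {tag: exif[tag] for tag in exif} if exif else {}

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the JPEG at a known quality and diff it against the original.
    Regions that re-compress very differently show up brighter, which can hint
    at splicing or synthesis; on its own this is only a weak signal."""
    original = Image.open(path).convert("RGB")
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)
    return ImageChops.difference(original, resaved)

if __name__ == "__main__":
    suspect = "suspect_headshot.jpg"   # hypothetical file name
    print("EXIF tags found:", exif_summary(suspect))
    error_level_analysis(suspect).save("ela_map.png")
```

Checks like these are cheap but far from conclusive; the better-quality fakes pass them, which is why serious verification still relies on specialist analysts and dedicated detection models.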
While some of America’s legislators were railing against Big Tech last week for “censoring" free speech on social media, the spread of misinformation is rapidly becoming one of the internet’s most intractable problems. I have written before on how almost every social media network is on high alert over objectionable video content. Trying to weed out online conspiracies keeps them on edge, too. But no matter how vigilant they are, deep fakes have made it easier for conspirators to spread misinformation.
The perfect way to hide behind a false statement is to employ a deep fake. The problem for content moderators is that deep fakes are almost untraceable. One expert interviewed by Reuters said investigators chasing the origin of such photos are left “searching for a needle in a haystack–except the needle doesn’t exist."
Big Tech recognizes this problem. Merely identifying a deep fake, let alone tracing its creator, is a difficult task. Facebook drew many participants to an open deep fake detection challenge that it ran last year, and released the results in June 2020. The best detection model the challenge produced, it turned out, worked only 65% of the time (bit.ly/2Dg5EeZ). Facebook says this 65% figure establishes a shared baseline for the artificial intelligence community to build on.
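To give a sense of what challenge entrants were building in spirit, here is a toy sketch in Python of a binary real-versus-fake image classifier fine-tuned from a pretrained network. It is emphatically not Facebook’s benchmark or any winning entry; the folder layout and hyperparameters are assumptions, and the 65% figure above was measured on unseen test data, which is precisely what makes the problem hard.

```python
# Toy real-vs-fake classifier sketch, NOT the Deepfake Detection Challenge winner.
# Requires: pip install torch torchvision

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects face_crops/real/*.jpg and face_crops/fake/*.jpg (hypothetical layout).
train_set = datasets.ImageFolder("face_crops", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone with a two-class head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a handful of epochs for a demo
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(1) == labels).sum().item()
        total += labels.size(0)
    # Training accuracy only; a real evaluation, like the challenge's 65% figure,
    # would be measured on a held-out set of videos the model has never seen.
    print(f"epoch {epoch}: training accuracy {correct / total:.2%}")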
What was once a difficulty only for social media is now a pressing problem for news outlets as well. Publications that insist on factual accuracy while allowing varied voices to be heard may now need filters to screen out the deep-fake Taylors of the world. Being paid for their content by social media networks, as Australia proposes to enforce, will certainly help defray the costs of the vigilance that news publishers must now bear.
Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India
"Exciting news! Mint is now on WhatsApp Channels 🚀 Subscribe today by clicking the link and stay updated with the latest financial insights!" Click here!