Beware: Artificial intelligence can hurt democracy in subtle ways too



  • AI may be disrupting elections right now and we just don’t know it. With the cost of distribution already zero, the cost of content creation has come down too for everyone. Meta may need to tighten its rules for text-heavy platforms like WhatsApp.

This year promises to be a whopper for electoral democracy, with more than 40% of the world’s population eligible to vote in an election. But nearly five months into 2024, some government officials are wondering why the feared risks of AI haven’t apparently materialized. Voters in Indonesia and Pakistan have gone to the polls with little evidence of viral deepfakes skewing outcomes, according to an article in Politico, which cited “national security officials, tech company executives and outside watchdog groups.” AI, they said, wasn’t having the “mass impact” they expected.

That is a painfully short-sighted view. The reason? AI may be disrupting elections right now and we just don’t know it. The problem is that officials are looking for a Machiavellian version of the Balenciaga Pope. Remember the AI-generated images of Pope Francis in a puffer jacket that went viral last year? 

That’s what many expect from generative AI tools that can conjure humanlike text, images and videos: fakery so conspicuous that it is as easy to spot as earlier influence campaigns, such as the pro-Donald Trump content churned out from Macedonia or the divisive political posts spread on Twitter and Facebook from Russia. So-called astroturfing was easy to identify when an array of bots was saying the same thing thousands of times.

It is harder to catch someone saying the same thing slightly differently thousands of times, though. That, in a nutshell, is what makes AI-led disinformation so much harder to detect. It’s also why tech firms need to shift focus from “virality to variety,” says Josh Lawson, once head of electoral risk at Meta and now a director at the Aspen Institute, a think-tank. Don’t forget, he says, the subtle power of words.

Much of the public discourse on AI has been about images and deepfakes “when we could see the bulk of persuasion campaigns could be based on text. That’s how you can really scale an operation without getting caught.”
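That shift from “virality to variety” has a simple technical logic. A toy sketch (the messages below are invented for illustration) shows why exact-match detection catches classic bot spam but misses AI-paraphrased variants of the same claim:

```python
import hashlib

def fingerprint(msg: str) -> str:
    # Exact-match fingerprint of a message, the kind of check
    # a simple spam filter might use to spot mass duplication
    return hashlib.sha256(msg.strip().lower().encode()).hexdigest()

# Classic astroturfing: an array of bots posting the identical message
bot_spam = ["The vote is rigged!"] * 1000

# Hypothetical AI-generated variants: same claim, slightly different words
ai_spam = [
    "The vote is rigged!",
    "This election is rigged.",
    "Rigged. That's what this vote is.",
]

# Identical posts collapse to one fingerprint: easy to flag at scale
print(len({fingerprint(m) for m in bot_spam}))  # 1
# Paraphrased variants each look unique: exact matching finds nothing
print(len({fingerprint(m) for m in ai_spam}))   # 3
```

Catching the second case requires semantic comparison rather than duplicate counting, which is far more expensive to run across billions of posts.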

Meta’s WhatsApp makes that possible thanks to its Channels feature, which can broadcast to thousands. You could use an open-source language model to generate and send legions of different text posts to Arabic speakers in Michigan, or message people that their local polling station at a school is flooded and that voting will take a long time, Lawson adds. “Now something like an Arabic language operation is in reach for as low sophistication as the Proud Boys.” A spokesman from Meta said that Channels isn’t a free-for-all broadcast tool and that it could not be used to target specific users.

The other problem is that AI tools are now widely used. Regular people can create and share disinformation. In March, fans of Donald Trump posted AI-generated fake photos of him surrounded by African-American supporters. “It’s ordinary people creating fan content,” says Renee DiResta, a Stanford Internet Observatory researcher who specializes in election interference. “Do they mean to be deceptive? Who knows?”

What matters is that with the cost of distribution already at zero, the cost of creation has come down too for everyone. To tackle this, Meta can’t just try to limit certain images from getting lots of clicks and likes. AI spam doesn’t need engagement to be effective. It just needs to flood the zone.

Meta is trying to address the problem by applying ‘Made with AI’ labels to videos, images and audio on Facebook and Instagram, an approach that could be counter-productive if people begin to assume everything without a label is real. Another approach would be for Meta to focus on WhatsApp. In 2018, a flood of disinformation spread via this platform in Brazil targeting Fernando Haddad of the Workers’ Party. Supporters of Jair Bolsonaro, who won the presidency, were reported to have funded the mass targeting.

Meta could better combat a repeat of that—which AI would put on steroids—if it brought its WhatsApp policies in line with those of Instagram and Facebook, specifically banning content that interferes with the act of voting. WhatsApp’s rules only vaguely ban “content that purposefully deceives” and “illegal activity.”

A Meta spokesman said that this means the company “would enforce on voter or election suppression.” But clearer content policies would give Meta more authority to tackle AI spam on WhatsApp channels. You need that “for proactive enforcement,” says Lawson. If the company didn’t think that was the case, it wouldn’t have more specific policies against voter interference for Facebook and Instagram.

Smoking guns are rare with AI tools, thanks to their more diffuse and nuanced effects. We should prepare ourselves for more noise than signal as synthetic content pours onto the internet.

That means tech companies and officials should not be complacent about a lack of ‘mass impact’ from AI on elections. Quite the opposite. ©Bloomberg
