AI’s Rapid Growth Threatens to Flood 2024 Campaigns With Fake Videos
Summary
- Millions of people have the tools to create deceptive political content
China invades Taiwan and migrants surge across the U.S.-Mexico border in a video depicting the aftermath of President Biden’s re-election. In a series of images, former President Donald Trump is pursued on foot and apprehended by uniformed police officers. Another photo shows the Pentagon engulfed in flames following an explosion.
The common denominator among these scenes? They are all fake. Rapidly evolving artificial intelligence is making it easier to generate sophisticated videos and images that can deceive viewers and spread misinformation, posing a major threat to political campaigns as 2024 contests get under way.
Phony imagery isn’t new to campaigns. During the 2020 presidential race, Trump shared a fake animation of Biden repeatedly sticking his tongue out with the caption “Sloppy Joe.” A slowed-down video that falsely made then-House Speaker Nancy Pelosi appear impaired racked up millions of views in 2019.
What has changed is that synthetic media has become far easier to create with the rollout of so-called generative AI systems that can quickly transform simple inputs into sophisticated-looking videos, photos, music and text. With millions of users now able to access such tools, campaign officials are bracing for 2024 to usher in a level of digital creation and proliferation unlike any previous election season.
“It’s not going to create brand-new realms or types of disinformation that we’ve never before imagined, but it’s going to make it easier and faster and cheaper to produce,” said Teddy Goff, digital director for former President Barack Obama’s re-election campaign. “And the consequences of that are going to be pretty profound.”
The Republican National Committee was behind the video portraying a dystopian America should Biden secure a second term. Trump posted a manipulated video of CNN anchor Anderson Cooper reacting to the former president’s appearance at a town hall hosted by the network.
“That was President Donald J. Trump ripping us a new a—— here on CNN’s live presidential town hall,” Cooper says in the doctored video.
Trump also shared a parody of Florida Gov. Ron DeSantis’s glitch-filled presidential-campaign launch on Twitter Spaces. The fake video posted by Trump features DeSantis, Twitter Chief Executive Elon Musk, Democratic donor George Soros, former Vice President Dick Cheney, Adolf Hitler and the devil and appears to use AI-generated voice clones, including of Trump, who interjects to say, “Hold your horses, Elon, the real president is going to say a few words.”
Spokespeople for the Trump campaign and the RNC didn’t return requests for comment.
The speed at which AI can generate content is seen as a game changer. Rather than relying on consultants and digital experts, campaigns can use AI as a far cheaper means of responding to events in real time.
Democratic and Republican consultants say they are also testing AI and the viral chatbot ChatGPT as digital organizing tools that can help draft speeches, fundraising emails and text messages, and build voter files. Although campaigns will still need to review and edit AI-generated content, the technology could significantly reduce the amount of time spent on day-to-day voter contact.
Online watchdogs are warning that the technology could be used for more nefarious purposes, including to spread false information about polling hours and locations, voter-registration deadlines or how people can cast their ballots.
On the eve of the initial round of Chicago’s mayoral election in February, staff for candidate Paul Vallas noticed a video circulating on Twitter. It showed his photo and played a voice that sounded like his, appearing to condone police brutality, said Brian Towne, Vallas’s campaign manager.
The video didn’t circulate widely and likely didn’t affect the vote, Towne said. Vallas finished first in the February round but lost a runoff. Nevertheless, Towne described the episode as a dangerous precedent. He said that he didn’t know who created the video and that the campaign determined it was likely created using AI.
“For an informed voter, the video eventually comes across as fabricated,” Towne said. “But you also have a lot of uninformed or disengaged voters that may view just a snippet of the video and suddenly they become more inclined to vote against a candidate.”
Social-media platforms often have policies that state they will take down misleading, manipulated content. Enforcement of those policies can be inconsistent or slow, and platforms sometimes make exceptions for false posts by candidates in the name of allowing free political debate.
The rise of generative AI systems has prompted tech leaders to call for a new labeling system, allowing users to see whether a piece of content is AI-generated. The RNC’s video on Biden’s re-election came with a disclaimer in small white text stating “built entirely with AI imagery.”
“It’s quite obvious that users should not be subjected to random disinformation without some knowledge of who did it and where it came from,” said Eric Schmidt, former CEO of Google and head of a congressionally appointed commission on artificial intelligence.
Google and Microsoft, a backer of ChatGPT creator OpenAI, have both said they are launching tools that will mark AI-generated content with data about its origin.
In a preliminary step toward regulation, the White House recently asked for public input on AI issues, including how to address challenges to the electoral process.
Advocates say there are situations where existing laws could apply to the use of AI-generated content in elections. Federal election law bars candidates from impersonating other candidates. The advocacy group Public Citizen recently petitioned the Federal Election Commission, which enforces campaign laws, to issue guidance saying that provision would apply if one candidate used an AI-generated portrayal of a competing candidate in a campaign ad. The agency hasn’t responded to the petition.
The FEC has no authority to police average users of social media who might post a fabricated video that goes viral.
Robert Weissman, president of Public Citizen, said political parties and media outlets should declare the use of fraudulent media as “out of bounds.” He added, “We are not actually prepared for the challenge.”
Write to Sabrina Siddiqui at sabrina.siddiqui@wsj.com and Ryan Tracy at ryan.tracy@wsj.com