From fabricated images of Donald Trump's arrest to a video depicting a dystopian future under Joe Biden, the 2024 race for the White House faces an avalanche of misinformation facilitated by technology in what is considered the first US election of the artificial intelligence era.
Activists on both sides of the political divide are using advanced AI-based tools that many technology experts consider a double-edged sword.
AI tools can clone the voice of a political figure in an instant and produce videos and text so seemingly real that voters could struggle to separate truth from fiction, undermining confidence in the electoral process.
At the same time, campaigns are likely to use the technology to boost operational efficiency in everything from analyzing voter databases to writing fundraising emails.
A video released in June by Florida Governor Ron DeSantis' presidential campaign purportedly showed former President Trump hugging Anthony Fauci, a leading member of the US coronavirus task force. AFP fact-checkers discovered that the video used AI-generated images.
After Biden formally announced his candidacy for re-election in April, the Republican Party released a video it said was an "AI-generated look at the country's possible future" if the Democrat won. It showed realistic images of panic on Wall Street, China invading Taiwan, waves of immigrants overwhelming border agents, and a military takeover of San Francisco amid atrocious crime.
Other examples of campaign-related AI images include fake photos of Trump being detained by New York police officers and a video of Biden declaring a national military draft to support Ukraine's war effort against Russia.
"Generative AI threatens to fuel online disinformation campaigns," said the nonprofit organization Freedom House in a recent report, which warned that the technology was already being used to defame electoral opponents in the United States.
"Disinformation purveyors are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern." More than 50% of Americans expect AI-generated falsehoods to influence the outcome of the 2024 election, according to a survey published in September by media group Axios and business intelligence firm Morning Consult.
About a third of Americans said they will trust election results less because of AI, according to the survey.
In a hyperpolarized political environment, observers warn that such sentiments risk fueling public anger against the electoral process, echoing the January 6, 2021 assault on the US Capitol by Trump supporters driven by false claims that the 2020 election had been stolen from him.
"Through (AI) templates that are easy and inexpensive to use, we will face a wild west of campaign claims and counterclaims, with a limited ability to distinguish false material from real and uncertainty about how these appeals will affect the election," said Darrell West of the Brookings Institution.
At the same time, rapid advances in AI have made it a "revolutionary" resource for understanding voters and campaign trends at a "very granular level," said Vance Reavie, CEO of Junction AI.
Previously, campaigns hired expensive consultants to develop outreach plans and spent hours writing speeches, talking points and social media posts, but AI makes it possible to perform the same tasks in a fraction of that time, Reavie told AFP.
But the potential for abuse is clear: when AFP asked the AI-powered ChatGPT to create a pro-Trump campaign newsletter, feeding it the former president's false statements already debunked by US fact-checkers, it produced, in a matter of seconds, a slick campaign document built on those falsehoods.
When AFP asked the chatbot to make the newsletter more "aggressive," it spouted the same falsehoods in an even more apocalyptic tone.
Authorities are rushing to establish safety barriers for AI, and several US states, such as Minnesota, have passed laws criminalizing deepfakes intended to harm political candidates or influence elections.
On Monday, Biden issued an executive order to promote the "safe and reliable" use of artificial intelligence.
"Deepfakes use AI-generated audio and video to defame reputations, spread fake news, and commit fraud," Biden said at the signing of the executive order.
He expressed concern that scammers could take a three-second recording of someone's voice and generate an audio deepfake. "I've seen one of mine," he said. "I said to myself, 'When the hell did I say that?'"