Oct 31, 2024
U.S. Elections in the Age of AI: What to Expect
Artificial intelligence
The first U.S. elections since the rise of generative artificial intelligence are fast approaching. This technology, capable of autonomously creating convincing content, raises major concerns about its impact on democracy. What’s changing, you ask? Experts have long worried that the use of deepfakes could fuel a wave of disinformation, sow doubt, and influence the voting decisions of Americans.
Voters at the mercy of AI
How is generative AI influencing elections today?
While generative AI offers numerous opportunities to reach voters—such as targeted ads that make campaigns more efficient—it also introduces new challenges related to disinformation.
As expected, AI-generated deepfakes are proliferating as Republicans and Democrats battle it out.
Elon Musk recently came under fire for sharing a fake video about Democratic candidate Kamala Harris's campaign with his 192 million followers on X. The Tesla, X, and SpaceX CEO is a known supporter of her opponent, Donald Trump. You can probably guess what followed…
In the video, AI voice cloning technology made Kamala Harris say things she never uttered—claims that President Joe Biden is senile, that she knows nothing about leading a country, and that she’s only running because she’s a woman of color.
The video heightened concerns about AI's potential to mislead voters as the elections approach, especially since nothing indicated it was fake. A citizen unfamiliar with the political back-and-forth could easily take it at face value and decide not to vote for Harris.
Donald Trump, too, has been the target of widely shared deepfakes. Fake images have shown the former president fleeing the police or being arrested. Even though these scenes never happened, it’s hard for the average citizen to distinguish fact from fiction—we rely on photos and videos daily to inform ourselves.
Worrisome risks
What are the risks associated with deepfakes and disinformation?
Experts warn that these technologies could be used to steer voters toward a particular candidate or away from one—or even to discourage them from voting altogether. Imagine how fake content could inflame tensions in a country like the U.S., where polarization is already so extreme!
The use of AI in elections raises important ethical questions. Is it acceptable to use deepfakes to influence voters? Could deceiving and manipulating the public in this way harm democracy itself?
Regulations and the future
As we discussed in last month's article, there are no specific laws in the U.S. to regulate such uses of AI. Certain social media policies do require that AI-generated content be clearly labeled to avoid confusion, but these measures are far from sufficient.
In light of the recent wave of disinformation and the upcoming election, tech giants like Google, Meta, and OpenAI have been urged once again to tighten security measures around generative AI.
As part of efforts to protect the integrity of the elections, a group of companies has signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. This agreement aims to strengthen the prevention, detection, and identification of misleading electoral content, as well as responses to it. Signatories include not only the giants mentioned above but also TikTok, Amazon, X, IBM, Anthropic, and Microsoft. The accord also includes efforts to educate the public on how to protect themselves from deepfakes.
Will this initiative deliver tangible results? Will generative AI be used to promote democracy by engaging young voters and marginalized groups, or will it serve as a tool of manipulation in electoral campaigns? We’ll have to wait until November 5 to see how this controversy evolves.
In the meantime, double- or even triple-check any important information you come across online—fake content is already overwhelming us!