Artificial Intelligence and the 2024 US Election: Myths and Realities
As the 2024 US presidential election unfolded, one question loomed large: Would artificial intelligence (AI) shape the outcome? This was the first presidential election held amid widespread access to AI tools capable of generating synthetic media, including images, audio, and video, which could be used for manipulation. Early in the year, ahead of the New Hampshire primary, a robocall featuring an AI-generated voice resembling President Joe Biden's urged voters to stay home, prompting the Federal Communications Commission to act swiftly and ban AI-generated voices in robocalls.
This event became a flashpoint in the debate over AI's potential to disrupt elections. Sixteen states passed legislation regulating AI's use in political campaigns, often requiring clear disclaimers on AI-generated campaign content distributed close to Election Day. The Election Assistance Commission released a comprehensive "AI toolkit" for election officials, offering guidance on handling the challenges posed by AI-driven misinformation, and several states set up resources to help voters distinguish authentic content from AI-generated fakes.
Experts had raised alarms about AI's potential to produce deepfakes, fabricated video and audio clips that could mislead voters by making political figures appear to say or do things they never did. Concerns extended beyond domestic actors, with warnings that foreign adversaries could exploit AI to sway public opinion. Despite these fears, the anticipated flood of AI-driven misinformation largely failed to materialize.
When Election Day came and went, misinformation remained a dominant issue, but it was largely based on old tactics. Claims about vote counting, mail-in ballots, and voting machines circulated widely, but the content was mostly created through traditional methods such as text-based posts and images taken out of context. “This was not ‘the AI election,’” said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights. “Generative AI turned out not to be necessary to mislead voters.”
Professor Daniel Schiff of Purdue University echoed this sentiment, stating there was no "massive eleventh-hour campaign" that misled voters or influenced polling places. He noted that while misinformation existed, it was unlikely to have been a decisive factor in the presidential race.
AI-generated misinformation that did gain traction often supported existing narratives rather than creating entirely new falsehoods. For example, after false claims were made by former President Donald Trump and his running mate about Haitians allegedly eating pets in Springfield, Ohio, AI-generated images and memes spread across the internet, reinforcing the narrative without necessarily fabricating new information.
At the same time, efforts to curb the negative impact of AI on elections gained momentum. AI-driven risks prompted a collective response from governments, public advocates, and researchers. Schiff observed that the attention given to potential AI harms resulted in effective safeguards, helping to minimize the risks.
Social media platforms took action, too. Meta, which owns Facebook, Instagram, and Threads, required advertisers to disclose the use of AI in political advertisements, while TikTok introduced mechanisms to label AI-generated content. OpenAI, the company behind ChatGPT and DALL-E, banned the use of its tools in political campaigns, further limiting AI’s potential to influence the election.
Despite these safeguards, traditional misinformation techniques still reigned supreme. Siwei Lyu, a professor of computer science specializing in digital media forensics, explained that conventional methods of spreading falsehoods remained more effective than AI-generated media. Research likewise showed that AI-generated images did not achieve the virality of traditional memes, though both could still gain traction.
In the end, prominent figures with large followings, such as Trump, spread misinformation without relying on AI-generated content. His false claims about illegal immigrants voting were amplified through speeches, media interviews, and social media posts, helping to shape public opinion despite the lack of AI-driven influence.
While the role of AI in the 2024 election may not have been as significant as some predicted, the ongoing battle against misinformation remains central. The election highlighted the complex interplay between technology, policy, and public perception, underscoring the need for vigilance in managing the risks posed by new technologies in the political landscape.