As the 2024 U.S. presidential election approaches, the use of AI in political campaigns is becoming more widespread, with some campaigns deploying deceptive AI-generated images, videos, and text to mislead voters, deepen bias, and undermine fair competition. The United States currently lacks effective laws and regulations to address this challenge, and the gap in campaign rules has injected enormous uncertainty into the presidential race, with implications for U.S. and even global politics.
Experts worry that the technology could accelerate the erosion of trust in media, government, and society. A fake video, an email full of fabricated stories, or a doctored image of urban decay could widen partisan divisions by showing voters what they expect to see. People could sink deeper into polarized information bubbles, trusting only the sources they choose to believe.
How AI could impact the 2024 U.S. election
The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its ethics code. Larry Huynh, the group's president, said, "People can't help but push the limits to see how far they can take things. Like any tool, they can be put to bad uses, to deceive voters, to mislead voters, to convince voters of things that don't exist."
"If someone can make noise, create uncertainty or create a false narrative, that could be an effective way to influence voters and win a campaign." Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report this past May that "since the 2024 presidential election could hinge on tens of thousands of voters in a handful of states, anything that can tilt people in one direction or the other could end up being the deciding factor."
The report, titled "How Artificial Intelligence Will Change the 2024 Election," highlights three concerns.
First, politicians could use generative AI to respond immediately to campaign developments. In the coming year, response times could shrink from hours or days to minutes. AI could scan the internet, weigh strategy, and produce a polished appeal, whether a speech, press release, image, joke, or video touting one candidate's advantages over another's.
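To make that concrete, a rapid-response workflow might look something like the sketch below, which uses OpenAI's chat completions API to draft a reply to a breaking development. The model name, prompts, and news snippet are placeholders chosen for illustration, not anything a real campaign is known to use.

```python
# Hypothetical sketch of a rapid-response drafting loop. The model name,
# prompts, and input are illustrative assumptions, not a real campaign tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_response(news_snippet: str) -> str:
    """Turn a breaking news item into a draft press release in seconds."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You are a campaign communications assistant. "
                           "Draft a short, factual press release.",
            },
            {"role": "user", "content": f"Respond to this development: {news_snippet}"},
        ],
    )
    return completion.choices[0].message.content


print(draft_response("Opponent announces a new infrastructure plan."))
```

The point of the sketch is the turnaround time: the entire monitor-draft-publish cycle collapses into a single API call, which is what makes minute-scale responses plausible.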
Second, AI can target audiences with great precision. Candidates do not want to waste money on voters who are already committed for or against them; they want to reach the small pool of swing voters. Given the depth of political polarization in the United States, only a small share of voters remain undecided.
The Center for Public Impact released a report on how Cambridge Analytica data was used during the 2016 U.S. election to send targeted ads based on the "personal psychology" of social media users. According to the report, "The problem with this approach is not the technology itself, but the covert nature of the campaign and the blatant dishonesty of its political message. Different voters receive different messages based on predictions of sensitivity to different arguments."
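As a rough illustration of the mechanics behind such targeting, the sketch below scores synthetic "voters" for persuadability with an off-the-shelf classifier and keeps only the undecided middle band. Every feature, label, and threshold here is invented for illustration; real microtargeting operations rely on far richer behavioral data.

```python
# Minimal sketch of persuadability scoring on synthetic data. All features,
# labels, and thresholds are invented; this is not any firm's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic voter features, e.g. scaled age, engagement score, issue interest.
X = rng.normal(size=(1000, 3))
# Synthetic label: 1 = responded to a past persuasion message, 0 = did not.
y = (X @ np.array([0.8, -0.5, 1.2]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a fresh batch of voters and keep only the "persuadable" middle band;
# near-certain supporters and opponents are skipped to avoid wasted ad spend.
scores = model.predict_proba(rng.normal(size=(200, 3)))[:, 1]
persuadable = np.where((scores > 0.4) & (scores < 0.6))[0]
print(f"{len(persuadable)} of 200 voters flagged for targeted messaging")
```

The filtering step is what the report's critics object to when it is done covertly: different slices of the electorate can then be shown different, even contradictory, messages.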
In addition, AI may democratize disinformation by giving ordinary people tools to promote their preferred candidate. One no longer needs to be a programmer or video professional to generate text, images, videos, or programs; anyone can become a political content creator and try to sway voters or the media. New technologies also let people monetize discontent, profiting from others' fear, anxiety, or anger.
AI technology is now far more powerful than ever before, and while not perfect, it is improving rapidly and is easy to use. In May, OpenAI CEO Sam Altman told a Senate subcommittee hearing that he was very worried about the 2024 presidential election, and that the technology's ability to "manipulate, persuade, and provide a kind of one-on-one interactive disinformation" was "an important area of concern."
Pushing for new "guardrails"
However, as increasingly sophisticated AI-generated content appears more and more often on social networks, most platforms are unwilling or unable to regulate it. Ben Colman, CEO of Reality Defender, a company that provides AI-generated content detection services, said this regulatory gap allows unlabeled AI-generated content to do "irreversible damage" before anyone can address it.
"For the millions of users who have already seen and shared fake content, explaining that it's fake after the fact is not only too late but has little effect." Coleman added.
Many political consultants, election researchers, and lawmakers say building new guardrails, such as laws regulating synthetic ads, is a top priority. Existing precautions, such as social media rules and services that claim to detect AI content, have failed to stem the tide.