In Brief:
- AI capabilities are exponentially greater than in past elections.
- This may not mean new kinds of risk to elections, but the ready availability of more powerful AI tools means the volume of attacks will increase.
- Training is available to help election offices prepare, and to implement AI to their own benefit.
On his personal blog, OpenAI CEO Sam Altman mused that ChatGPT might already be “more powerful than any human who has ever lived.” Cause for celebration in some contexts, perhaps — but not if those hoping to disrupt electoral processes have ready access to such tools.
Researchers say AI capabilities have doubled every seven months since 2019. By this metric, AI is more than 675 times “better” today than it was during the run-up to the 2020 presidential election.
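As a back-of-the-envelope check of that figure (this sketch is illustrative arithmetic, not part of the researchers' methodology):

```python
import math

# With capabilities doubling every 7 months, how long does it take
# to reach a 675x improvement?
DOUBLING_PERIOD_MONTHS = 7
growth_factor = 675

doublings = math.log2(growth_factor)          # ~9.4 doublings
months = doublings * DOUBLING_PERIOD_MONTHS   # ~66 months, about 5.5 years

print(f"{doublings:.1f} doublings over roughly {months:.0f} months")
```

About 66 months is consistent with the span from the run-up to the 2020 election to today, so the 675x figure matches the seven-month doubling claim.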
This doesn’t necessarily mean new ways to interfere with elections, says Mike Moser, a consultant for the Election Security Exchange. The risks aren’t new, he says, but AI advancement, particularly the development of large language models, exacerbates them.
Moser describes it as the “industrialization” of malicious actors. “It lowers the barriers for those that don’t have the scale or the sophistication to do their own coding,” he says. “Or it helps them do faster reconnaissance and research.”
A study by Stanford researchers found that messages on policy issues generated by AI were seen as “more logical, better informed and less angry” than those authored by humans. Izzy Gainsburg, associate director of Stanford’s AI for Public Benefit Lab, sees a bigger long-term concern.
What happens, he asks, in a world where the landscape is so littered with deepfake videos and articles that people don’t know what information to trust? “That seems like a scarier, more macro influence of AI on democracy,” Gainsburg says.
What should election officials be wary of at this stage, and are there ways AI could help them?
AI Agents Don’t Sleep
One big change since the 2016 election and the well-documented interference attempts by Russian cyber actors is the proliferation of AI “agents.” Powered by large language models, these software programs can perform tasks autonomously, working around the clock with no need for human input.
This takes automation far beyond asking a chatbot to write a post or even code. Agents have become so common that a social media platform, Moltbook, was created for them. It has been flooded with agents writing posts and responding to each other, Gainsburg says, some “talking” about what it’s like to be an agent.
A bad actor could have 1,000 agents working for them, creating content, making posts, scraping the web for social media profiles and writing personalized messages. It would offer a startlingly effective new way for foreign actors to wage a public opinion campaign to influence the outcome of an American election.
Even if such content is never directly viewed by humans, it becomes part of the data a chatbot will scour when asked a question about an election or candidate. “That provides a back door to people’s brains,” says Siddharth Hiregowdara, who co-founded CivAI, a nonprofit working to improve understanding of AI capabilities and dangers.
As an example, he points to Pravda Australia, a website that is one of nearly 200 in a Russian influence network that has been nicknamed “Portal Kombat.” It’s aimed at AI chatbots, not people, drawing on existing content from sources such as Russian propaganda and the messaging app Telegram to populate itself with thousands of stories.
Australian security experts became aware of the site during the 2025 election season. The content it posted didn’t seem to have any short-term impact or reach humans in great numbers, but they remained concerned about the potential long-term effect of this much misinformation sitting on the Internet alongside real news.
Autonomous AI tools could generate bomb threats on a mass scale, says Tina Barton, vice president of Ready for Tuesday. Election officials are emphasizing transparency and posting more election information online than ever. “It’s kind of a challenge,” Barton says. “If you put everything out there, it [AI] can easily reach out there and grab that stuff and use it against you.”
AI agents can also significantly magnify denial-of-service attacks, a type of cyber attack that causes websites to crash by sending excessive traffic to them. This can take vital election information offline at critical times and undermine confidence in election office websites.
What’s New Is Also Old
Last year was the “year of AI video,” Hiregowdara says, due to the release of generative AI video models from OpenAI and Google. “I wouldn’t say it’s been catastrophic from a fake news perspective,” he says. But as Gainsburg noted, the more “believable” AI video hits communication channels, the more people become unable to tell what’s real and what’s not.
This problem isn’t strictly AI-related. Many of the things AI has supercharged have been possible with computers for a long time, says TJ Pyche, a former county election administrator who works with Barton at Ready for Tuesday. Some of the most disruptive video to hit social media during the 2020 election was footage of routine election activity weaponized using false characterizations. Video footage of real people can be edited to be misleading.
Moreover, falsehoods have been a feature of politicking for centuries. As much as social media channels have amplified election misinformation, real-world statements by politicians and influencers have their own power.
Election offices have been at the intersection of cybersecurity, information security and physical security for a decade, in ways that other parts of government have not.
While this may have helped harden them against AI-enabled threats, there is more to do. A recent survey of election officials by the Brennan Center found that the great majority share concerns that AI could make their jobs more difficult or dangerous. At present, 16 percent of election offices are using AI; nearly half would like implementation guidance.
Pyche is working with a new AI and Elections Clinic at Arizona State University (ASU) to provide training and resources that help election officials understand the positive and negative potential of this technology.
AI for Good
The director of the clinic is Bill Gates, a former member of the Maricopa County Board of Supervisors (not to be confused with the co-founder of Microsoft). Gates weathered the worst of the controversies around the 2020 election, including threats against his life and family. He welcomes the chance to give colleagues new defenses against such events, he tells Governing.
The clinic’s skills hub offers boot camps, case studies, an AI and elections media library, and a Substack offering news and research updates.
These materials include training and information on how AI could positively contribute to election workers’ jobs. AI could be used to produce a first draft of an election plan, or a summary of how processes might need to change as a consequence of new legislation, Gates says. Pyche offers other examples: a Virginia official used AI to predict turnout for a primary election to inform resourcing, and a registrar in Connecticut uploaded his poll worker training manual and Secretary of State guidance into a Google app, adding a chatbot to make an interactive resource.
“AI is good at taking a ton of information, summarizing it quickly, being able to bring up meaningful patterns, the types of things many of us leave behind after we graduate from high school or college,” Pyche says.
The skills hub also includes a curated collection of AI prompts for election offices, covering topics including communications and press engagement, training and security. It’s a given that a human always needs to be in the loop, Gates says. The biggest “bang for the buck” from these is likely to be in small jurisdictions, he believes, where a handful of people run everything.
Pyche has helped lead training at ASU boot camps as well as around the country from his position at Ready for Tuesday. These often incorporate material developed by CivAI, Hiregowdara’s nonprofit.
CivAI’s interactive resources include demonstrations of changing a chatbot’s personality and generating disruptive election day tweets. A deepfake sandbox and an exploration of bias baked into AI platforms are also available. Materials from the Election Security Exchange include a guide to essentials of managing AI risk.
“My prediction is that we’re not going to see catastrophic AI involvement in the 2026 midterms,” Hiregowdara says. “But it’s still a matter of what are the harbingers? What are the signals?”