
Big Tech's New Rules of the Road for AI and Elections

Major tech firms have signed an accord to fight the deceptive use of AI in 2024 elections. It’s a welcome signal, if not a promise to solve the problem.

Mark Zuckerberg's company, Meta, is among the major tech firms that have signed a voluntary agreement to adopt “reasonable precautions” to keep artificial intelligence from disrupting elections.
(Carson Niall/PA Photos/TNS)
In Brief:
  • Legislators and election officials share deep concerns about the impact of misinformation created by AI technology.

  • A group of the world’s largest technology companies has signed a voluntary accord to address this problem.

  • Threats posed by AI-generated content can only be contained by the companies behind the tools used to create and distribute it. The impact of this voluntary commitment will be determined by the resources they devote to upholding it.


    There is disagreement over many questions about elections, but partisans of all stripes are uneasy about the new kinds of disruption artificial intelligence could bring to upcoming elections. Since the start of the year, more than a dozen states have introduced bills to combat AI-generated threats such as deepfake videos, images or robocalls. But states have little control over the technology that makes any of these possible in the first place.

    Earlier this month, major tech companies including Amazon, Google, Meta, Microsoft, Adobe and TikTok signed off on “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” Because these firms build digital infrastructure used around the world, their pledge has ramifications for elections in which an estimated 4 billion voters will participate.

    The companies hope to limit the risks posed by deceptive content at every stage: its generation, its unclear provenance, and the spread of AI-created misinformation across platforms. No other sector has the collective resources to take this on. Unlike earlier technology breakthroughs, AI was developed by private companies.

    Social media is already awash with misinformation about candidates and elections. Can these companies do enough, fast enough, to limit confusion about the truth and provenance of communications created using AI?

    Doing Rather Than Talking


    No single company, even a technology giant, can control all the pieces of the technology infrastructure that come into play with AI-generated content.

    That's why it's significant that companies behind AI-producing software, such as OpenAI, are partnering with content platforms like Meta and TikTok, according to Rachel Orey, senior associate director at the Bipartisan Policy Center. This will set the stage for them to share what they learn about emerging or unexpected risks and how to mitigate them, as well as ways that AI itself is being used to circumvent safeguards.

    Noah Praetz, president of The Elections Group, agrees that cooperation between these companies may be the only path toward the positive outcomes election officials would hope to see. The baseline principles are lofty, but the story of how the parties adhere to them is yet to be told.

    Orey notes that the parties have only agreed to “seek” to detect and address deceptive content. “Seeking is one thing, doing is another,” she says. Achieving these goals will require devoting significant internal resources at a time when tech companies have been laying off trust and safety teams.

    The companies can ease this concern by being transparent with one another and the public about their work. “Will they report, on at least a quarterly basis, the progress they are making, what they are doing, how much they are investing?” says Lawrence Norden, senior director of elections and government at the Brennan Center for Justice, a think tank at NYU law school.
    Members of the Georgia House Technology and Infrastructure Innovation Committee approved legislation to address the use of AI technology in elections.
    (TNS)

    Still Plenty of Misinformation


    AI may be bringing new dimensions to misinformation, but false or misleading campaign messages, video and photos are not new. The current information environment is already a “dumpster fire,” Praetz says. AI can act as an accelerant to this, but he’s not sure restricting its use will leave voters less angry or less confused.

    Even if the tech companies fully commit and execute on all of their promises, Norden says, this will slow the deceptive use of AI, rather than “save” the election from its impacts. For one thing, the accord does not mention unsecured, open source AI systems. “These tools can be used by bad actors to interfere in our elections,” says Norden. “The commitments by these companies cannot prevent that.”

    The agreement's definition of “deceptive AI election content” encompasses AI-generated audio, video and images. It does not mention AI-generated text, or AI tools that push messages into the information stream, already a major source of election misinformation.

    The most powerful strategy set forth in the accord might be “engaging in shared efforts to educate the public about media literacy best practices.” To the extent that this is done in a timely manner — and draws on the enormous communications power that resides in the parties — it could have a major influence on voters’ ability to detect and evaluate misinformation of all kinds.

    Tech companies have reasons beyond safeguarding democracy to build trust in AI. Some forecasters estimate that generative AI could become a $1 trillion market over the next decade. It won’t help their brands if high-profile controversies give the public the idea that AI is a tool for people with bad intentions, or create the impression that the companies developing it are ambivalent about its misuse.

    Kent Walker, Google’s president of global affairs, made this point in an announcement about the accord. “We can't let digital abuse threaten AI's generational opportunity to improve our economies, create new jobs, and drive progress in health and science,” he said.
    Carl Smith is a senior staff writer for Governing and covers a broad range of issues affecting states and localities. He can be reached at carl.smith@governing.com or on Twitter at @governingwriter.