Deepfakes Dominate Early Attempts to Regulate AI

Colorado has passed the nation’s most ambitious AI regulatory law. In other states, lawmakers are regulating fake likenesses involving porn, politics and celebrities.

Despite some reservations, Colorado Gov. Jared Polis signed the nation’s most sweeping AI bill so far.
(Michael Ciaglo/Getty Images/TNS)
In Brief:
  • Hundreds of bills to regulate artificial intelligence have been proposed in states in 2024.
  • Colorado has passed the most sweeping attempt to protect individuals from systemic harm.
  • Most legislation focuses on deepfakes in political material and protecting people from sexual exploitation.

In a big move for AI legislation, Colorado closed out May by enacting a groundbreaking law governing the use of artificial intelligence, one of the broadest measures to pass amid increasing concern about the technology and its uses.

The bill is the most sweeping to pass in any state this year. Although lawmakers across the country have been searching for ways to address AI, their efforts have largely been limited to regulating deepfakes — doctored video or audio convincingly close to the real thing — as used in political campaigns and pornography, or in fake re-creations of celebrities.

Colorado’s new law aims to protect consumers from algorithmic discrimination. Placing the responsibility on developers of “high-risk artificial intelligence systems,” the law requires risk management policies and programs, impact assessments and consumer notification whenever a high-risk system makes a consequential decision affecting a consumer. High-risk systems include those used in education, employment and immigration decisions, and the law could affect AI developers and deployers across the private sector.

Connecticut has a similar bill that passed the state Senate, but it encountered criticism and failed to pass the House prior to adjournment. Democratic Gov. Ned Lamont has expressed concern that the bill might “stymie innovation.”

States have been the primary drivers of AI legislation across the United States as progress on federal AI regulation has been slow. More than 100 AI bills have been introduced in California alone.

So far in 2024, there have been hundreds of bills focusing on some aspect of artificial intelligence across 43 states and two territories. Most are still pending, with many of the bills that have passed focusing on smaller, more targeted areas where generative AI has influence or raises concern.

Deepfakes remain the largest concern for states’ AI regulation. Sixteen states have passed regulations attempting to put guardrails around the use of deepfakes in political advertisements. In Arizona, Secretary of State Adrian Fontes used a deepfake video message to warn about the dangers of fabricated political content at the end of May. Shortly after, Democratic Gov. Katie Hobbs signed legislation mandating deepfake disclosures for political material.

More than 20 states have prioritized regulating — in most cases, banning — the use of AI to create deepfake sexual content. Following a wave of incidents in which teen girls were targeted by their peers, legislators in dozens of states are focusing on updating existing laws around child sexual exploitation material to include AI-generated imagery.

Protecting Celebrities

The first state to attempt to protect celebrities from unauthorized use of their image was Tennessee, with its ELVIS Act. The law, which goes into effect next month, strengthens existing rules about the unauthorized use of a person’s likeness by also adding their voice to the list of things protected from unlicensed re-creation.

“The ELVIS Act puts in critical safeguards to protect the humanity and artistic expression of Tennessee innovators and creators,” state House Republican Leader William Lamberth said in a statement. “While we support the responsible advancement of this technology, we must ensure we do not threaten the future livelihood of an entire industry.”

At the federal level, legislation to regulate generative AI in the music and movie industries is only just getting off the ground. The bipartisan NO FAKES Act would prevent individuals and companies from using AI to create unauthorized digital replicas of celebrities’ likenesses.

Zina Hutton is a staff writer for Governing. She has been a freelance culture writer, researcher and copywriter since 2015. In 2021, she started writing for Teen Vogue. Now, at Governing, Zina focuses on state and local finance, workforce, education and management and administration news.