States such as Maryland have written laws to protect kids from manipulative AI-designed content online. In North Dakota, Wisconsin, Alabama and other states, legislatures have acted to safeguard against misleading voters with AI-generated content. Laws in Kansas, Montana and elsewhere protect critical infrastructure and national security and cybersecurity interests.
The states are leading the way because our constituents have asked us to guard against the worst abuses and threats of AI. Each of these laws, and hundreds more like them, was passed by legislatures on a broadly bipartisan basis. Yet each is now under threat of invalidation by a little-noticed provision in the reconciliation bill passed by the U.S. House.
As leaders of the National Conference of State Legislatures’ Task Force on Artificial Intelligence, Cybersecurity and Privacy, we have spent the last several years working with colleagues nationwide on the leading edge of the AI revolution. We have collaborated with industry to understand how these technologies work, their potential to improve our lives, and how malicious actors could leverage them for nefarious aims. We have used this knowledge to develop targeted, thoughtful, bipartisan legislation that protects our constituents.
Don’t mistake this legislation for hostility to innovation. Across the nation, and across the aisle, state lawmakers are enthusiastic about AI’s potential and want the United States to lead the world in this technology. We believe AI can unlock new economic growth, akin to the advent of the internal combustion engine and the telephone. But even as new technologies bring opportunities, it is our duty as policymakers to think about potential downsides and address them appropriately.
The pre-emption provision in the reconciliation bill bluntly says that no state may “enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next decade. If enacted, years of careful work by states to adopt laws that strike a balance between fostering innovation and safeguarding consumers would be tossed out the window, with no federal laws governing AI in their place.
Our nation’s law enforcement officials agree. A bipartisan group of 40 state attorneys general recently stated: “The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI. This bill will affect hundreds of existing and pending state laws passed and considered by both Republican and Democratic state legislatures. Some existing laws have been on the books for many years.”
We get why some in the AI industry may support pre-empting state lawmaking. They might argue that innovation would suffer under a patchwork of regulations that don’t perfectly line up. We’ve heard that argument from other industries through the years. We’re not buying it.

Under our constitutional framework, states are not only permitted but obligated to act in the absence of federal action. We have both the right and responsibility to protect our constituents through laws that reflect distinct local contexts and challenges. A uniform approach must be earned through collaborative policymaking — not simply imposed through a draconian decadelong freeze that locks states out.
As the Senate takes up the reconciliation package, we urge senators to remove the pre-emption provision and allow state legislatures to continue leading the way as we embrace AI’s potential while guarding against its risks.
Thomas C. Alexander is the president of the South Carolina Senate. Jacqui Irwin is a member of the California Assembly. They serve as co-chairs of the National Conference of State Legislatures’ Task Force on Artificial Intelligence, Cybersecurity and Privacy.
Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.