States Plan to Continue Regulating AI, Despite Trump’s Order

The order calls for suing and denying grants to states with “onerous and excessive” artificial intelligence regulations, and for recommending a “minimally burdensome” national standard to pre-empt state laws.

President Donald Trump displays a signed executive order as Sen. Ted Cruz (R-TX), second from left, Commerce Secretary Howard Lutnick and White House artificial intelligence and crypto czar David Sacks look on in the Oval Office of the White House on Thursday, Dec. 11, 2025, in Washington, D.C.
(Alex Wong/TNS)
In Brief:

  • The president’s executive order is intended to pave the way for more AI innovation by combating “burdensome” state regulations and calling for a single regulatory environment in place of a state-by-state patchwork.
  • But states say their laws are important to safeguard their residents against a quickly developing technology.
  • The new executive order calls for the executive branch to work with Congress to create a federal AI policy, but questions remain about whether Congress will act, and what such a policy could look like.


After President Donald Trump signed an executive order last week targeting state AI laws, officials in several states said they plan to stay the course and regulate the new technology as they see fit.

Last week’s executive order promises legal and financial repercussions for states with “onerous and excessive” regulations, and calls for Congress to create a light-touch federal AI policy that pre-empts stronger state laws. Much remains uncertain — including which laws will be targeted and whether Congress will actually take up Trump’s call to pass a law.

AI regulations have been a hot area for states as the technology evolves and spreads quickly. In 2025 alone, 46 states passed 159 AI laws. These laws run the gamut, aimed at everything from regulating AI chatbots to requiring disclosure of AI use in political ads.

The federal government, in turn, has repeatedly considered halting state laws. Congress considered then rejected adding a 10-year moratorium on state AI laws to the One Big Beautiful Bill Act, and recently opted against adding an AI pre-emption provision to the National Defense Authorization Act (NDAA). The White House’s July AI Action Plan called for finding ways to withhold funding from states with burdensome AI regulations.

Last week’s executive order says today’s state-by-state approach to regulation leads to a difficult regulatory landscape for companies. It also asserts that some state AI laws are infringing on interstate commerce or “requiring entities to embed ideological bias within models.”

“We remain in the earliest days of this technological revolution and are in a race with adversaries for supremacy within it,” the order says. “To win, United States AI companies must be free to innovate without cumbersome regulation.”

Unlike earlier attempts to simply stop states from enforcing AI laws, this executive order also seeks to create a federal policy to replace them.

Questions of Authority


Some experts believe state AI laws have indeed gone too far.

Regulations that affect how companies train, evaluate and deploy their AI models essentially set the bar for the entire country, because companies don’t have the resources to tailor their products to each and every state, says Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law. The executive order reminds states that they can pass AI laws, so long as they stay within the appropriate bounds, Frazier says.

“While it may be frustrating for many Americans that Congress has yet to dictate a clear national framework for governing AI, the fact of the matter is that no state has the authority to regulate as if they’re stepping into the shoes of Congress,” Frazier says.

But Colorado state Rep. Brianna Titone says it's the executive order that violates the separation of powers: only Congress, not the president, she argues, has the authority to pre-empt state laws.

“States have rights to put policies in place that we feel are in the best interest of our constituents. The Constitution allows that … no executive order can legally stop us from doing that, and it will be challenged in court,” Titone says.

The National Conference of State Legislatures registered its own opposition to federal pre-emption in November, stating that it "strongly opposes any effort to override state-level artificial intelligence laws whether through executive action or legislation." Thirty-six attorneys general previously sent a letter to Congress protesting the inclusion of a pre-emption measure in the NDAA, as did 280 state lawmakers.

States Keep Regulating


California Gov. Gavin Newsom, a Democrat, called the executive order a “con” run by Trump and his federal AI adviser, saying on X that the order “does little to protect innovation or the interests of Americans. California will continue building a nation-leading innovation economy while implementing common sense safeguards.”

New York state Assemblymember Alex Bores responded to announcements that Trump planned to sign the executive order by saying that this shows that “Big Tech billionaires are the ones who are really in charge,” and that “states like New York must fight back to create a future that works for everyone.”

Bores co-sponsored a state policy that requires creators of powerful AI models to make safety plans and report major security incidents. An AI company-backed super PAC is working to unseat Bores in next year’s midterms.

Florida Gov. Ron DeSantis, a Republican, has introduced his own slate of AI safety regulations and said he believes these aren’t the kind of laws targeted by the executive order. Even if the federal government brought a legal challenge, he believes the laws would survive.

“Even reading it very broadly, I think the stuff we’re doing is going to be very consistent [with the executive order],” DeSantis said. “But irrespective, clearly, we have a right to do this.”

Spotlight on Colorado


The executive order explicitly takes aim at the Colorado AI Act, which has yet to take effect. The policy aims to protect consumers from discrimination when AI systems are used to make decisions about their access to health care, employment and other important areas.

Trump’s order highlights the law as an example of something that could “embed ideological bias” in AI models, in this case by potentially causing AI systems “to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” In a July executive order, Trump claimed that AI designed with diversity, equity and inclusion principles in mind skews outcomes and engages in “the suppression or distortion of factual information about race or sex,” such as showing an image of a historical figure where the race or sex is inaccurate. But bill sponsor Titone says AI systems have long been known to hallucinate regardless of developers’ efforts to prevent discrimination.

“You already have a lot of hallucinations happening, and that’s the whole reason why that discrimination is likely occurring to begin with,” Titone says.

Could the Order Help or Hurt Innovation?


Some industry voices like the Security Industry Association — a global trade group for cybersecurity companies and other security solutions providers — praised the order.

“It is important to ensure that we don’t stifle innovation by forcing businesses small and large to navigate a hodgepodge of inconsistent state measures like we have seen proposed,” said Jake Parker, senior director of government relations, in an emailed statement. “When it comes to potentially harmful uses of AI, any new laws should be narrowly tailored to address specific use cases; however, overly broad state measures could alter the consistency and predictability of the U.S. regulatory environment.”

Some, however, say the executive order could backfire. Existing research suggests that “what really harms innovation is regulatory uncertainty, rather than more complicated compliance regimes,” says Scott Babwah Brennen, director of New York University’s Center on Technology Policy. The order creates plenty of uncertainty, including around which state laws will face and withstand legal challenge and whether the executive order itself will withstand legal challenge.

It also remains to be seen what the recommended federal framework will be, and how Congress will respond.

The executive order gives a few guideposts about what Trump would like to see. First, the order suggests that Congress’ eventual federal AI policy should not pre-empt certain kinds of state laws: those regarding child safety protections, data center infrastructure, and state government use and procurement of AI. Second, the framework should “ensure that children are protected, censorship is prevented, copyrights are respected and communities are safeguarded.”

Reaching unified, federal laws on AI would make a lot of sense, Babwah Brennen says, but “pre-empting state laws and saying, we’re going to do something at some point, that we don’t know when that is or what it will look like — that is more concerning.”

Jule Pattison-Gordon is a senior staff writer for Governing. Jule previously wrote for Government Technology, PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon.