This is their second attempt. A similar provision in the GOP’s reconciliation package collapsed this summer after states’-rights conservatives and Democrats objected. Now the GOP is trying again, by slipping a nationwide moratorium into the National Defense Authorization Act. Shutting down our laboratories of democracy wouldn’t just undermine local autonomy — it would cripple the nation’s ability to develop evidence-driven approaches to one of the most consequential technologies of our time.
The persistence of this effort makes one thing unmistakable: Federal pre-emption of state AI laws has become a priority for President Donald Trump and Republican leaders, who argue that a “patchwork” of state rules threatens innovation and U.S. competitiveness with China. That logic assumes that uniformity produces better policy, when the opposite is often true.
The proposed moratorium rests on the notion that innovation thrives only under a single national standard and that differing state approaches represent inefficiency rather than insight. But effective regulation begins with iteration, trial and localized problem-solving. States legislate first, learn what succeeds and fails, and generate the evidence Congress can later use to build durable national frameworks. State diversity — economic, demographic and political — is not a hurdle to innovation; it is the engine that drives it.
We’re already seeing that engine at work. Tennessee, anchored by the music industry, moved first to protect artists from AI-generated impersonation. Montana, focused on critical infrastructure, enacted rules requiring risk assessment for AI-controlled systems. Automated tools have been implicated in questionable insurance denials and wrongful benefit cuts; Pennsylvania created an appeals process for AI-driven health-insurance denials, while other states empowered regulators to scrutinize automated decision-making.
We do not yet know which solutions work best — and that uncertainty is precisely why we need a multiplicity of approaches, not a nationwide freeze on state autonomy. Congress has debated AI for nearly a decade and passed just one federal AI law: the Take It Down Act, which targets non-consensual intimate imagery and followed similar protections already adopted in most states. Meanwhile, in this year's legislative sessions, 38 states have enacted around 100 AI-related measures. The action and the innovation are happening in statehouses, not in Washington.
Many congressional lawmakers, including moderate Democrats, support a strong federal AI standard that pre-empts inconsistent state laws — but only if that framework is real, robust and enforceable, not a placeholder used to justify a decade of paralysis. Some conservative Republicans share that view. Translation: There is no coalition for “no rules.” A Gallup poll shows that 80 percent of voters support rules for AI safety and data security, even if these rules slow down the rate of technology development. The only viable path is a national standard built on the foundation states have already laid — not a federal decree that freezes progress where it stands.
Freezing state action without federal rules would leave Americans exposed. Without state protections, there would be no clarity on whether companies running critical infrastructure are using AI in ways that increase cyber vulnerabilities, no meaningful oversight of AI-driven health insurance denials, and no limits on governments deploying untested automated systems to accuse residents of wrongdoing. Teenagers would continue to encounter mental health chatbots that describe suicide plans. The only guaranteed federal requirement would be the removal of non-consensual intimate imagery — nothing more. Everything else would depend on industry self-regulation. That is not innovation; it is a windfall for Big Tech.
The claim that state-level AI rules hinder startups or help China is equally flimsy. The real barrier to innovation is the oligopoly already dominating the AI industry. A handful of giants control the computers, data and capital required to compete. Without guardrails, they can release systems that hallucinate, discriminate or fail without consequences. Smart regulation levels the playing field and rewards companies that build safe, reliable technology.
The defense-bill attempt may fail, but the broader effort to freeze state AI laws is not going away. Congress should reject any attempt to shut down the laboratories of democracy. The fastest path to effective national AI governance is the one that has worked for every previous technology: Let states experiment, let them learn, and then build federal standards from what works.
Elaine Chang is director of technological innovation for justice at the Institute on Race, Power and Political Economy. Prior to joining the Institute, she spent a decade in Silicon Valley building products that apply machine learning and AI to social problems.
Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.