Why the Feds — Not the States — Should Take the Lead on Regulating AI

Conflicting mandates chill innovation and create a compliance nightmare while putting national security at risk. A federal moratorium on state regulation would be a good step toward developing a coherent national strategy.

The transition from the Articles of Confederation to the Constitution turned on a simple idea: Some matters require the resources, uniformity and coordination only made possible by a strong central government. How to precisely draw the line between the issues best left to states and those deserving of federal attention has flummoxed generations of policymakers, jurists and scholars.

When it comes to artificial intelligence regulation, however, the answer is easy: The economic and national security implications of AI development place it squarely in the domain of the federal government pursuant to the commerce clause. That’s precisely why Congress has the authority to move forward with a proposed moratorium on state regulation of AI.

A bill working its way through the House includes a prohibition on states enforcing a broad range of AI regulations for 10 years. Opponents insist that this proposal would amount to the federal government invading the clear authority of the states. This argument rests on shaky legal ground. Though the founders and the Supreme Court expected and intended for states to regulate matters of local concern, that power has explicit limitations. Alexander Hamilton, for example, argued in Federalist No. 11 for a “vigorous national government” with broad authorities over commerce rather than leaving economic matters to the states, which would render the economy susceptible to manipulation from foreign countries and the “little arts of the little politicians to control or vary the irresistible and unchangeable course of nature.”

In more practical terms, the Supreme Court recognized in Pike v. Bruce Church in 1970 that even state statutes that regulate “evenhandedly to effectuate a legitimate local public interest” may run afoul of the commerce clause. The court has also enforced the so-called extraterritoriality principle, which “protects against inconsistent legislation arising from the projection of one state regulatory regime into the jurisdiction of another State.”

Fast-forward to today’s debate over AI. The hundreds of bills pending before state legislatures exhibit the very concerns Hamilton raised and the court has addressed. First, regulations supposedly intended to address local concerns over AI offer far too few benefits in light of the potential burdens on the AI economic ecosystem. The RAISE Act, pending in the New York Legislature, for instance, would require labs working on leading AI models to comply with vague obligations as well as submit to frequent audits.

These regulatory hurdles aim to diminish “critical harms” that may occur across the nation, rather than addressing explicit and local concerns. State legislation also risks entrenching significant inconsistencies in the law: the bills vary in definitions, in expectations and in timelines.

In many cases, such proposals address harms already covered by existing laws and fail to consider viable alternatives. Legislation addressing the harms posed by algorithmic pricing, for instance, may needlessly duplicate protections afforded to residents by broad state consumer protection laws.

Imagine a scenario where an AI developer in Texas must navigate a different set of rules for deploying a new application in California, another in Florida and yet another in New York. In the absence of an extensive AI moratorium, this inevitable patchwork of regulations would create an untenable compliance nightmare, particularly for smaller innovators and startups that lack the vast legal resources of tech giants.

The end result isn’t better protection for citizens; it’s a chilling effect on innovation. Resources that could be poured into research and development, into creating safer and more beneficial AI, would instead be diverted to deciphering and adhering to a labyrinth of conflicting state mandates. This directly impedes interstate commerce, the very thing the commerce clause was designed to protect and promote.

Furthermore, the development and deployment of advanced AI is inextricably linked to national security. AI is not merely another consumer product; it is a foundational technology with profound implications for defense, intelligence and international competitiveness. A fragmented regulatory approach, where each state charts its own course, risks creating vulnerabilities that could be exploited by global adversaries.

If the U.S. is to maintain its leadership in AI and ensure that this powerful technology is developed and used responsibly, a coherent national strategy is paramount. A moratorium on state-level regulation would provide the necessary breathing room for federal agencies, in consultation with experts, industry and civil society, to develop a comprehensive framework.

Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas at Austin School of Law.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.