Many of these measures would aggressively regulate AI systems in unproven ways to unknown ends. The effect would be to give California outsized influence over the development of an important sector, and over interstate commerce more generally.
This puts California on a collision course with the Trump administration and many members of Congress who want to formulate a coherent federal policy framework to ensure that the U.S. remains at the forefront as the AI race with China intensifies.
The Trump administration recently released a major AI Action Plan explaining the president’s priorities and vision for AI. The plan stresses the need “to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.” To accomplish this, “[a] coordinated federal effort would be beneficial in establishing a dynamic, ‘try-first’ culture for AI across American industry.” In other words, the Trump administration’s vision for AI is rooted in a more market-oriented approach to AI policy aimed at boosting innovation and investment.
California’s new slate of AI bills moves in the opposite direction and threatens to encumber national markets with confusing new extraterritorial regulations that will undermine the goal of keeping the nation at the forefront of the computational revolution. The simple and necessary path forward is through Congress.
California’s European-Style Approach
Assembly Bill 1018 zeroes in on automated decision systems, which have become common among private and public actors for everything from hiring to risk assessment. Analysis of the bill by the California Senate Committee on Appropriations details why this proposal should raise alarm bells for those who seek a harmonized AI policy landscape. As explained in that review, the bill would impose “unknown, potentially significant costs” on local agencies, perhaps amounting to “hundreds of millions of dollars annually statewide.”
Already resource-strained public entities would need to spend major sums to develop “audits, training, staffing, disclosures, appeal processes, data retention [systems],” and several other bespoke compliance mechanisms. It’s unclear if those public actors have the institutional capacity to take on such technical and wide-reaching work. Delays in implementing the Colorado AI Act illustrate that well-intentioned state regulation that’s disconnected from state capacity is a recipe for dysfunction.
Although AB 1018 is probably the most far-reaching and potentially disruptive of California’s current crop of AI bills, other measures look to regulate algorithmic systems in equally burdensome ways. If enacted, Senate Bill 53 would require labs that “harness an extraordinarily high amount of compute power [to] create, implement, and publish both a safety and security protocol and a transparency report for each released model.” Though its proponents claim it’s a “light touch” bill, the Chamber of Progress, a tech industry group, warns that it suffers from scope creep. In practice, it would not just regulate “frontier models,” but rather apply to all “large developers.” Who qualifies as a large developer? Not clear! The attorney general will figure it out down the road. This lack of clarity is common across similar proposals and exposes the ambiguity that may permeate the AI regulatory landscape if California and other states insist on rushing forward with their own proposals.
California is essentially borrowing the European Union’s technocratic, top-down regulatory system. If this sweeping set of bills passes, the aggregate effect would be micromanagement of every facet of emerging digital technology based on hypothetical future problems, with seemingly no consideration of the forgone benefits. That sort of speculative, precautionary regulation has decimated innovation and investment in Europe and will do so here if California gets to dictate AI policy for the nation.
Even supposedly minor regulatory burdens can add up to innovation-quashing costs for smaller AI developers and deployers. Consider that just a slight change to a privacy policy can amount to $6,000 in legal fees, per Engine. That might not seem like much to tech behemoths, but it is a consequential sum for startups that have roughly $55,000 in operational resources available each month. If even a handful of California’s bills get through, or a few states replicate those proposals, startups will have to spend dollars best suited to driving America’s economy forward on a seemingly endless compliance to-do list and regulatory paperwork requirements.
The ‘California Effect’ Has Come for AI
What all this regulation adds up to is the spread of the “California effect” to more markets. Because California’s market is so large, companies often find it easier to comply with its rules everywhere they operate, allowing the state’s regulatory approach to quickly spread nationwide. Lawyers at Holland & Knight recently explained how the sector would suffer if the California effect grows in AI markets:
“[M]anufacturers are likely to be guided by more [technologically] conservative jurisdictions, like a California, like the European Union, as they make decisions. And in the absence of an overarching federal policy, let's just call it a vision for uses of healthcare AI, you're going to see manufacturers likely to adopt some of these more conservative standards, which paradoxically, may throttle some of the innovation that the administration clearly hopes will occur here in the United States.”
This is why Congress must step up and act to protect interstate algorithmic commerce and national AI priorities.
OpenAI recently released an important letter to Gov. Newsom that outlines its strategy for better coordination of AI safety standards. Rather than tasking AI labs with adhering to manifold bespoke arrangements, OpenAI envisions companies adhering to national and global guidelines on responsible development and deployment while Congress crystallizes a formal, uniform approach. In essence, the company is trying to get buy-in from key states, starting with California, for letting the federal government establish safety and compliance ground rules for frontier models, with states then treating those federal standards as the operational baseline for compliance with any other state AI safety mandates.
Congress will need to go further than this, but what OpenAI is outlining could be a first step in helping to ensure that the interstate marketplace and national interests are protected from a patchwork of state and local AI mandates. Federal lawmakers will need to not only establish baseline AI safety standards but also build mechanisms that keep innovation alive while ensuring accountability. One promising approach, currently being explored by Sen. Ted Cruz, R-Texas, would be the creation of regulatory sandboxes and safe harbor provisions for developers who commit to transparent reporting and auditing. By giving innovators room to experiment within clear guardrails, Congress can avoid freezing AI development in outdated compliance regimes while still safeguarding the public interest.
Equally important, Congress should recognize that heavy-handed compliance costs threaten to crush smaller players in the AI ecosystem. While tech giants can absorb the expense of complex audits and disclosure mandates, startups and open-source labs often operate on razor-thin margins. Federal lawmakers can address this imbalance by tailoring certain requirements to firm size and ensuring that new rules scale with the resources of those subject to them. Without such measures, America risks narrowing the AI market to only the largest incumbents — a dynamic that would harm both competition and innovation.
As President Trump warned in unveiling his AI Action Plan, “We need one common sense federal standard that supersedes all states,” or else “the most restrictive state will be the one that rules.” Unless Congress acts with urgency, the California effect will once again become America’s default, and the nation’s AI future will be dictated not by federal vision but by the preferences of one state legislature.
Kevin Frazier is the AI innovation and law fellow at the University of Texas School of Law. Adam Thierer is a senior fellow for technology and innovation at the R Street Institute.
Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.