The question of whether the federal government should block state AI laws arose again and again in 2025. Finally, President Donald Trump forced the issue, signing an executive order late last year that threatened legal and financial pressure against states with overly “burdensome” AI regulations.
While many states questioned whether and how the order would affect them, others had no such confusion. The draft version of the order specifically singled out California’s Transparency in Frontier Artificial Intelligence Act, also known as SB 53, which went into effect in January 2026.
So what’s all the fuss about?
California’s law aims to compel developers of powerful, general-purpose AI models to adopt safeguards meant to prevent the technology from causing large-scale property damage, bodily injury or death.
Proponents say the law offers essential guardrails on a quickly developing technology. But detractors say it makes companies jump through a number of burdensome and unnecessary hoops — and that the law is not fair to all companies. President Trump’s executive order said SB 53 is the kind of law that “threaten[s] to undermine the innovative culture” that American AI companies need.
California’s laws are especially important to the country’s overall AI policy picture, given that the state is home to 32 of the world’s top 50 AI companies. SB 53 has also inspired copycat policies in other states, including New York.
SB 53 focuses on companies that make “frontier” AI models, or generative AI tools that are adaptable, can be used for a wide variety of tasks and are often trained on large amounts of data.
While some state laws seek to prevent algorithmic discrimination or malicious or abusive deepfakes, California’s law focuses on what it calls “catastrophic risk.” That’s the likelihood that AI models are used in ways that “materially contribut[e]” to at least 50 people dying or being seriously injured or to more than $1 billion worth of property damage.
Specifically, SB 53 is concerned about AI being used to help create or release biological, chemical, nuclear or radiological weapons or an AI model acting largely on its own, with little human oversight, to conduct cyber attacks, murder, assault, extortion or theft. The law is also concerned about AI models evading the control of developers or users.
Critics contend that these eventualities are vanishingly unlikely. Trump’s draft executive order says SB 53 imposes “complex and burdensome requirements” all “premised on the purely speculative suspicion that AI might ‘pose significant catastrophic risk.’”
But supporters of the law say there’s reason to believe these kinds of guardrails may be necessary sooner than we think.
Anthropic, the public benefit corporation behind the Claude large language models, detected a new kind of AI-enabled attack last fall. A nation-state-backed hacking group was using one of the company’s AI tools to conduct cyber attacks, something Anthropic said was “unprecedented.” Those attacks targeted government agencies, financial institutions, chemical manufacturing companies and technology companies.
SB 53 is Sen. Scott Wiener’s second pass at the issue, after Gov. Gavin Newsom vetoed a 2024 attempt on the grounds that the earlier bill struck the wrong balance. But Newsom’s veto letter agreed the state needs to proactively create guardrails to prevent potential catastrophe. On the same day that Newsom vetoed the bill, he commissioned AI experts to create a report that could help policymakers craft frontier AI model regulations. That report, released in 2025, influenced SB 53’s approach.
If there’s enough evidence to reasonably predict certain harms could happen, policymakers shouldn’t wait to see harms occur in the real world before regulating to prevent them, the report advised: “We do not need to observe a nuclear weapon explode to predict reliably that it could and would cause extensive harm.”
SB 53 largely focuses on companies with sizable revenue. Under the law, large AI developers must create (and follow) protocols for assessing catastrophic risks, and then act to mitigate the risks they find. They must also publicly document their protocols and publish various other details, like information about the data they used to train their models and how the company responds to critical safety incidents.
Large AI companies must also inform the state if an internal use of their models causes “catastrophic risks.” Companies cannot penalize whistleblowers for alerting the company or the government about AI developer activities that seriously jeopardize public health or safety.
Furthermore, any frontier AI developers — even small ones — that release new frontier models must publish information about what the model is meant to be used for.
SB 53’s text explains that policymakers wanted to “avoid burdening smaller companies” and that additional laws can be passed if smaller players start making AI models that pose serious risks. Anthropic praised that choice and publicly endorsed SB 53.
Two technology industry associations and the California Chamber of Commerce, however, criticized the decision in a September 2025 joint letter opposing the bill. The small Chinese company DeepSeek has already created a “hugely influential and potentially risky” AI model, they argued. Meanwhile, some large companies might only be exploring AI in basic ways that aren’t particularly risky. The Newsom-commissioned report, while it doesn’t endorse any particular legislation, notes these same points.
“The act’s narrowly focused requirements on large developers may unintentionally hinder innovation, unfairly burden particular companies and fail to fully account for the broader ecosystem of AI model development,” Aodhan Downey, state policy manager for the Computer and Communications Industry Association, said in an emailed statement.
In contrast, Collin McCune, the head of government affairs at the venture capital firm Andreessen Horowitz, argues that the law’s choice to focus on how AI is developed actually hurts small companies, rather than unfairly sparing them.
“Regulating how the technology is developed [is] a move that risks squeezing out startups, slowing innovation, and entrenching the biggest players,” McCune posted on X.
While detractors say the law is anti-innovation, proponents say it is light-touch.
“Rather than prescribe specific technical standards that companies must follow, SB 53 simply requires companies to be transparent about the approaches they are using,” a coalition of 23 supporting organizations wrote in a September joint letter. Co-signers range from a nonprofit focused on child well-being to an AI watchdog group. “SB 53 would put in place the basic protections that we failed to enact when social media was first released to the public. … If basic guardrails like this had existed at the inception of social media, our children could be living in a safer, healthier world.”
Anthropic, too, says the bill largely enforces good practices that are already commonplace: “By formalizing achievable transparency practices that responsible labs already voluntarily follow, the law ensures these commitments can’t be abandoned quietly later once models get more capable, or as competition intensifies.”
With the law now in effect, only the coming months will tell whether it helps, hurts or gets tangled up in a federal legal battle.