Inside the Controversy Over Colorado’s AI Law

Negotiations over revising the first-in-the-nation law collapsed, and now it won’t take effect until at least June 2026.

House Majority Leader Monica Duran, left, and Sen. Anthony Hartsook, right, chat on the floor of the House during the last day of the 2025 legislative session at the Colorado state Capitol in Denver on May 7, 2025.
Helen H. Richardson/TNS
In Brief:

  • Colorado’s law aims to prevent AI from being used as a tool of discrimination. It’s meant to apply to “high-risk” AI systems that make or help make consequential decisions.

  • From the get-go, the governor wanted legislators to update the law before implementation to avoid an overly burdensome regulatory framework. But attempts to revise it during an August special session fell apart. A major sticking point was how much liability AI developers and deployers should have if the AI’s decisions are discriminatory.

  • Unable to reach an agreement, lawmakers delayed the law’s implementation. They’ll try again during their regular session in January.



Colorado passed groundbreaking artificial intelligence legislation last year, but getting the law implemented has proved difficult.

The Colorado AI Act sets first-in-the-nation comprehensive consumer protections, intended to safeguard the public from being discriminated against when AI is used to make decisions about their health care, employment or other important areas. But the law hasn’t yet gone into effect — and its implementation date is getting further away.

Companies, and even Gov. Jared Polis, worried that the new law would be burdensome and put a damper on AI innovations. Polis had signed the original bill into law with the understanding that it would be revised before its 2026 implementation.

Polis said that the “law creates a complex compliance regime” for AI developers and deployers, in part because it holds companies responsible for unintentional discrimination, and that it might reduce innovation by making it harder to do business in Colorado than in other states.

In August, Polis called a special session to amend the law, and lawmakers hurried to craft a revision that would please everyone: tech companies that make AI, consumer rights advocates and companies that use AI tools. But that work broke down, with the parties unable to agree on how to assign liability when an AI tool is found to have discriminatory impacts.

In the end, legislators simply kicked the decision down the road, agreeing to delay the original law’s effective date by four months.

As things stand, the AI law is set to go into effect in June 2026. But lawmakers will meet again in January to try to make adjustments. Even a sponsor of the original law says it needs fixes and hopes to see a revision.

In the meantime, some tech organizations praise the extra time to workshop the legislation, while consumer advocates say residents are being left unprotected for even longer.

How We Got Here


The Colorado AI Act targets automated decision-making systems used for “consequential decisions,” such as whom to hire, lend to, or provide with government services, health care, education, insurance or housing. It aims to protect consumers from discrimination or bias, regardless of whether the creators or users of those systems intended to discriminate.

For example, Amazon discovered in 2018 that its automated recruiting tool had taught itself to prefer men over women. The system had examined prior years of job applications in the male-dominated field for patterns and started giving worse scores to resumes that included the word “women’s” or mentioned certain all-women’s colleges. Amazon said at the time that its recruiters had checked the recruiting tool’s recommendations but never made decisions entirely based on them.
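To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a screening model can absorb proxy discrimination from skewed hiring history. The toy resumes, tokens and counting-based scoring rule are invented for illustration; nothing below reflects Amazon’s actual system, which was never made public.

    # Hypothetical, illustrative sketch only: toy data and a crude scoring
    # rule, not any company's actual system.
    from collections import defaultdict

    # Toy hiring history from a male-dominated applicant pool: resumes
    # mentioning "women's" activities were hired less often, so that
    # pattern sits in the training data whether or not anyone intended it.
    history = [
        ({"python", "chess club"}, True),
        ({"java", "rowing"}, True),
        ({"java", "chess club"}, True),
        ({"python", "rowing"}, True),
        ({"python", "women's chess club"}, False),
        ({"java", "women's rowing"}, False),
    ]

    def learn_token_weights(examples):
        """Weight each token by how much more often it appears on hired
        resumes than rejected ones -- a crude stand-in for the weights a
        real classifier would fit from the same skewed history."""
        hired, rejected = defaultdict(int), defaultdict(int)
        for tokens, was_hired in examples:
            for token in tokens:
                (hired if was_hired else rejected)[token] += 1
        return {t: hired[t] - rejected[t] for t in set(hired) | set(rejected)}

    def score(tokens, weights):
        return sum(weights.get(t, 0) for t in tokens)

    weights = learn_token_weights(history)
    # Two candidates with identical skills; only the gendered token differs.
    print(score({"python", "rowing"}, weights))          # 3
    print(score({"python", "women's rowing"}, weights))  # 0 -- penalized

Nothing in the sketch references gender directly; the penalty emerges from the historical pattern alone, which is exactly the kind of unintentional discrimination the Colorado law is written to cover.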

If the tool significantly influenced hiring decisions, the law would have held Amazon liable for its unintentionally discriminatory practice. As a deployer of the automated recruiting tool, the company would’ve been required to tell job candidates that AI was being used and to let rejected candidates learn what data about them the tool had weighed, correct any inaccurate data, contest the decision and request human review instead. The law also would’ve required several other mitigating steps from the company.

Some companies worried the AI law as written was too vague, broad and burdensome.

Efforts to revise the law during the regular session fell short, and pressure to reach an agreement intensified during the August 2025 special session. Among several proposals, two bills that would trim down the original law emerged as the most viable.

One was a narrower, bipartisan bill that would simply make companies tell consumers when they’re interacting with AI and clarify that existing consumer protections and civil rights law apply to AI. It would also let the attorney general sue developers or deployers whose use of AI violates the state’s consumer protection act. But as the special session wore on, that proposal lost steam.

Attention turned to the Democrat-backed Sunshine Act, proposed by two of the original AI bill’s sponsors. Among other things, it would make developers and deployers jointly and severally liable if an AI system discriminates when deployers use it as designed.

But ultimately, that too collapsed when lawmakers again failed to reach a compromise. In the end, they settled on simply pushing off the original law’s implementation until June 2026.

“It became impossible to iron out a path forward that works for everyone,” Senate Majority Leader Robert Rodriguez, a sponsor of both the AI law and the Sunshine Act, told the Senate at the time.

What Went Wrong?


Rep. Brianna Titone, a sponsor of the Colorado AI Act and the Sunshine Act, says one major sticking point was getting stakeholders to agree on what counts as an “automated decision-making system” covered by the law. Loren Furman, president and CEO of the Colorado Chamber of Commerce and a member of a task force launched in 2024 to revise the law, says there also wasn’t enough time for businesses to fully review proposed language around what different industries would have to include when disclosing their use of AI.

Even harder, though, was finding an agreement on liability.

“Unfortunately, a major hurdle for the passage of the Sunshine Act was trying to grapple with the responsibility and the liability between the people who developed the tools and the people who use the tools,” says Travis Hall, state director for the Center for Democracy and Technology, which was also part of the task force.

A special session lasting only a few days was too little time to resolve “a pretty complex legal situation like that,” Furman says. “It’s not a fair timeline. That is something that has to be worked through; you have to get attorneys involved.”

Furman had advocated for having the bill say that the various stakeholders would convene in a moderated working session to find a compromise on liability “but that [suggestion] wasn’t strong enough for” some consumer groups and labor organizations.

Titone says her discussions with representatives of Colorado-founded “smaller venture capitalist tech firms” were fruitful, with companies saying they’d be open to accepting some liability should they make a faulty AI product that causes harm. But larger developers balked.

“Ultimately, we were very close in wordsmithing the definitions and working out the details of the liability. But at the end of the day, the Big Tech companies just said that liability for them was a nonstarter,” Titone says.

Titone and Hall claim that some negotiators didn’t actually want a compromise.

A year after the Colorado AI Act passed, Congress mulled passing a decadelong moratorium on state and local AI regulation. Polis had supported that moratorium, hoping it would lead to a single national policy on AI. After Congress scrapped the proposed ban, the White House released a plan calling for withholding AI-related federal funding to states with “overly burdensome” regulations on the tech.

In this climate, AI developers may feel little pressure to agree to regulations. “Why would the big companies want to work with us? They have the president of the United States behind them,” Titone says. And, she says, current state law makes deployers, not developers, responsible if a tool they used discriminates, giving Big Tech firms little reason to desire change.

Brittany Morris Saunders, president and chief executive of the Colorado Technology Association, meanwhile pointed to different issues in published comments, saying that the perspectives of “established innovators and frontline sectors using AI” had been left out of negotiations.

What’s Next?


Morris Saunders supported the delay, saying via email that this allows more time to work out a solution to a complex problem that must satisfy the needs of many different stakeholders. Furman says lawmakers have shown more interest in her mediated working group idea since negotiations during the special session fell apart.

“Right now other states are looking to us as a model of what a healthy AI policy could look like, so we need to get it right,” Furman says.

Furman hopes to see agreement on a revision before the AI Act’s June 2026 implementation date, but, “if we get to a point during the regular session in 2026 where we need to extend the date out again, because there’s no compromise that’s been reached … we can do that.”

Hall, meanwhile, says the delay leaves consumers undefended against potential harms from systems that are already being used. And Titone worries that the longer the wait, the harder it’ll be to establish consumer protections, because in the meantime, AI will become more ubiquitous and tech companies that oppose regulation will become more powerful.

Titone says: “This is a pivotal moment right now. If we don’t do something now, when are we going to do it?”

Jule Pattison-Gordon is a senior staff writer for Governing. Jule previously wrote for Government Technology, PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon.