In Brief:
- Colorado had been trying for years to revamp its never-implemented Colorado AI Act. That 2024 law had required companies to take steps to prevent discrimination when AI was used to make important decisions about people.
- Digital rights advocates say the new law provides important safeguards — but not as many as the original law would have.
- Companies are mixed on the legislation. The American Fintech Council supported the bill, while the Business Software Alliance worried the law could put too much liability on developers.
Colorado just ended years of debate over how to revamp its first-in-the-nation artificial intelligence consumer protection law. Although the Colorado AI Act passed in 2024, the legislation was never implemented. Gov. Jared Polis signed it with the understanding that it would be revised before being enacted. But finding an acceptable revision proved elusive, and lawmakers ran out of time during a special session last year.
National players then added heat: last month, the Department of Justice intervened to support a lawsuit from Elon Musk’s xAI company challenging the Colorado AI Act. Ultimately, the suit resulted in a temporary halt on enforcing the law, which had been slated to go into effect this June.
That saga finally concluded Tuesday, when the legislature passed a compromise bill, SB 189, that would replace the 2024 policy and go into effect in 2027. Polis is expected to sign the bill.
The measure regulates situations where an AI system is used to help make “consequential” decisions, like whether someone gets a loan, government services, health care, housing or insurance. It takes a lighter-touch approach than the 2024 law.
“This bill is probably going to be used as a model in other states, so creating this balancing act was really important,” says Loren Furman, CEO of the Colorado Chamber of Commerce and a member of the AI policy working group the governor appointed to recommend a new approach after legislative negotiations stalled out last year. It took the working group six months and 11 drafts to settle on a proposal, and SB 189 enshrines all the working group’s recommendations.
SB 189 garnered strong support in the House and Senate — passing with votes of 57-6 and 34-1, respectively — and was sponsored by top leadership: Senate Majority Leader Robert Rodriguez, Senate President James Coleman, House Majority Leader Monica Duran and House Assistant Majority Leader Jennifer Bacon.
Even so, the law is very much a compromise, including for its sponsors.
“Everybody lost and everybody won,” Rodriguez said of SB 189’s passage. Earlier, he said that, compared to the 2024 Colorado AI Act, SB 189 is “not as comprehensive, and I am not happy with that.”
A Lighter Touch
The 2024 Colorado AI Act had required the makers and deployers of AI systems to take certain steps to reduce the risks of the algorithms having discriminatory outputs, regardless of whether the discrimination was intentional. For example, companies deploying high-risk AI systems had to conduct impact assessments, create a risk management plan and annually review the systems to ensure they weren’t causing discrimination.
SB 189 drops those requirements. The law primarily requires companies to alert people if AI is being used to make decisions about them, relying on the state’s existing anti-discrimination policies to protect people should discrimination occur.
“It's no longer really a discrimination bill,” says Travis Hall, director for state engagement at the Center for Democracy and Technology, a nonprofit focused on digital rights. “It is now almost entirely a disclosure-focused bill.”
People who are negatively impacted by an AI-made decision can request to see what personal data the AI system considered, correct any inaccuracies and request a human re-review of the decision. For example, a rejected job applicant or a denied loan-seeker could make sure the AI system didn’t base its decision on faulty information about them.
“Overall, it is still a step forward,” Hall says. “It is much less of a step forward than [the Colorado AI Act].”
Risk assessments were a contentious topic for the working group, Furman says. The original law said that companies had to take “reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination,” and that they would generally count as having done so if they took certain steps, including creating an impact assessment. These assessments had to include an analysis of the risk that an AI system could discriminate.
Small businesses were worried about having to conduct such assessments, Furman says, and some labor and consumer groups were also opposed to leaning too heavily on these assessments for protection. Ultimately, the working group decided not to include risk assessments in its recommended bill.
Under the new law, AI developers must give deployers certain information about how the AI system is supposed to be used, what categories of data it was trained on and any known limitations of the system.
The new law also splits liability for any potential algorithmic discrimination (a previous point of contention on the legislation): if deployers use an AI system in the way it was designed and intended and the results are still discriminatory, the system’s developer is at fault; if the deployer doesn’t use the AI tool as it was meant to be used, the deployer is on the hook.
Meghan Pensyl — director of policy for the Business Software Alliance, an international trade organization for the enterprise software sector — praised some parts of the law. But she worried other modifications could expand the law’s scope in unwarranted ways, for example by potentially applying to AI systems that only help facilitate services but don’t actually determine access to them. Pensyl also says some provisions risk developers being held responsible for the actions of deployers.
“189 is really a mixed bag,” Pensyl says.
Companies also will have some time to figure it out before they’re expected to be fully in compliance. The law goes into effect next year, and, for the first three years after the law is implemented, companies will have a 60-day right to cure.