
Why Colorado's Rethinking Its Burdensome AI Regulations

There are lessons for other states in Colorado, where policymakers are struggling to walk back legislation that would do more harm than good.

When it comes to regulating emerging industries and technologies, it’s always smart to move carefully and thoughtfully. And sometimes it’s best to take a step back.

The latter dynamic is playing out now in Colorado, where last year Democratic Gov. Jared Polis signed legislation known as Consumer Protections for Artificial Intelligence (CPAI) into law, making Colorado the first state to codify protections against the perceived risks of “algorithmic discrimination.” Since then, however, Polis, Attorney General Phil Weiser and even Denver Mayor Mike Johnston have all called on the legislature to delay or reform the measure before it takes effect next February.

Colorado’s first attempt to improve or delay the legislation failed earlier this month, and now lessons abound for other states considering new AI laws. Most importantly, states should think twice before passing burdensome laws they may soon regret. Instead, they should apply existing frameworks wherever possible and step in only when real, demonstrable harms cannot be addressed through laws already on the books.

At its core, CPAI seeks to eliminate any chance of AI-driven harm in “consequential decisions” in fields such as education, employment, financial or lending services, government services, housing, insurance, legal services, and health care. In practice, that means endless paperwork for AI developers and deployers. The carve-outs it offers are limited in scope and ultimately leave most small businesses exposed to the full weight of the regulation, hurting their chances of competing with larger incumbents.

While Colorado worries about AI harming education, health care and government, many other states are finding that AI drives helpful changes in each. In education, 83 percent of K-12 teachers use AI to speed up administrative work and “redirect time toward meaningful instruction.” In health care, AI shows potential to enable quicker, more accurate medicine discovery. In government, transportation departments in Texas and California leverage the technology to beat traffic congestion.

These advancements are made possible through AI innovation. If Colorado’s regulatory regime were to take effect in 2026 as CPAI is currently written, its heavy-handed requirements could push AI firms to migrate elsewhere and threaten the extent to which Colorado consumers could benefit from AI. The Centennial State is in dire need of a course correction.

Responding to criticism of the original law, the original authors introduced the Artificial Intelligence Consumer Protections bill to amend many of the CPAI’s provisions. The amendments narrow the scope of the original law to apply only where use of AI tools violates existing local, state or federal anti-discrimination laws; create several exemptions for developers; limit the definition of artificial intelligence; and eliminate many reporting duties for developers and deployers. The new bill would have streamlined CPAI, only requiring impact assessments relating to violations of labor laws, unfair trade practices or other laws already on the books. Additionally, the amendments would push the effective date back to January 2027. Such a dramatic shift would align Colorado better with states like Texas and Virginia, which are turning away from their original AI governance proposals in favor of more consumer-friendly frameworks.

But the amendments bill was postponed indefinitely and the lawmaking session ended, leaving Colorado no better off today than when it passed the original law last year. Colorado policymakers wanted to set an example of prudent AI policymaking. Instead, they are demonstrating the dangers of regulating AI prematurely, and becoming a case study in how difficult it can be to correct past legislative mistakes.

Other states should learn from Colorado’s experience. Before passing new laws that may do more harm than good, they should take the time to identify which existing laws can be applied to AI. That can help them avoid passing a law only to find themselves struggling, as Colorado has, to walk it back before its effective date. This more orderly approach can reveal where laws already protect consumers and where there are gaps in existing consumer protections.

Colorado’s effort to correct its AI law is a big step in the right direction, and the state’s lawmakers should continue their efforts, even if that requires addressing the matter in a special session. But its initial, misguided approach to AI regulation teaches a clear lesson for policymaking efficiency in other states: Don’t subject yourself to the same work twice.

Nate Karren is a policy analyst with the American Consumer Institute, a nonprofit education and research organization with a focus on taxation and regulation.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.