Colorado AI Act in plain English
Colorado is the first U.S. state with a broad, AI-specific law targeting high-risk consumer decisions. Here's who's in scope and what the obligations look like in practice.
The Act focuses on 'high-risk AI systems' that make consequential decisions about Coloradans in eight regulated areas: education, employment, financial or lending services, essential government services, healthcare, housing, insurance, and legal services.
Both developers and deployers of high-risk AI have obligations: risk management, impact assessments, consumer notice, and the ability to appeal decisions.
Most small businesses outside those eight areas are not directly affected — but if you sell into Colorado in any of them, plan ahead.
Who's in scope
The Act applies to developers (those who build or substantially modify high-risk AI systems) and deployers (those who use them) doing business in Colorado.
'High-risk AI system' means an AI system that, when deployed, makes or is a substantial factor in making a consequential decision in: education, employment, financial/lending services, essential government services, healthcare, housing, insurance, or legal services. If your AI helps decide who gets hired, who gets a loan, who gets housing — you're in scope when used on Coloradans.
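The scoping test above boils down to three questions: is the decision in one of the eight areas, is the AI a substantial factor in making it, and is it used on Colorado residents? Here's a rough first-pass triage helper, an illustrative sketch in our own shorthand (the area names and function are ours, not statutory language), and not legal advice:

```python
# Illustrative triage only -- not legal advice. Area names are our own
# shorthand for the Act's eight consequential-decision categories.
REGULATED_AREAS = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def likely_in_scope(decision_area: str, substantial_factor: bool,
                    used_on_colorado_residents: bool) -> bool:
    """First-pass check: does a system look like a 'high-risk AI system'
    under the Act? Counsel makes the real call."""
    return (decision_area in REGULATED_AREAS
            and substantial_factor
            and used_on_colorado_residents)
```

So a resume screener that materially influences hiring decisions about Coloradans, `likely_in_scope("employment", True, True)`, flags as in scope, while the same tool used purely for ad targeting does not.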
What deployers (most users) have to do
If you use a vendor's AI to make consequential decisions in any of the eight areas, expect to:
- Maintain a risk management policy and program for the high-risk AI system.
- Complete an impact assessment of how the system affects consumers, including bias considerations, at least annually and after any substantial modification.
- Notify consumers when a high-risk AI system is making or substantially affecting a consequential decision about them.
- Provide consumers a way to appeal or seek human review of an adverse decision.
- Disclose to the Colorado Attorney General, promptly after discovery, that a high-risk AI system has caused algorithmic discrimination.
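The five duties above are per-system, which is why most teams end up tracking them per deployment. A minimal sketch of such a tracker, using our own duty labels (these names are illustrative shorthand, not the Act's terms, and this is not a compliance product):

```python
from dataclasses import dataclass, field

# Our own shorthand labels for the deployer duties listed above --
# illustrative only, not statutory language or legal advice.
DEPLOYER_DUTIES = [
    "risk_management_program",
    "impact_assessment",
    "consumer_notice",
    "appeal_or_human_review",
    "ag_discrimination_disclosure_process",
]

@dataclass
class ComplianceChecklist:
    """Tracks which deployer duties are documented for one AI system."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, duty: str) -> None:
        if duty not in DEPLOYER_DUTIES:
            raise ValueError(f"unknown duty: {duty}")
        self.completed.add(duty)

    def outstanding(self) -> list:
        # Duties not yet documented, in the order listed above.
        return [d for d in DEPLOYER_DUTIES if d not in self.completed]
```

One checklist per high-risk system keeps the documentation trail that the rebuttable presumption (discussed below under enforcement) depends on.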
What developers have to do
If you build or substantially modify high-risk AI, expect heavier obligations: documentation packages for deployers, transparency about training-data limitations, mechanisms for deployers to assess risk, and disclosure to the Colorado AG of any algorithmic discrimination you become aware of.
Penalties and enforcement
The Colorado AG enforces the Act. Penalties are civil and tied to the scale and impact of violations. There is a 'rebuttable presumption' that a deployer exercised reasonable care if it followed the specified compliance steps — which is why the documentation matters.
The Act also includes safe-harbor language for organizations that align with recognized risk management frameworks like the NIST AI RMF or ISO/IEC 42001. Aligning with NIST is one of the cheapest insurance policies you can buy here.
Plain-English answers
We're not in Colorado. Do we have to care?
If you do business in Colorado, or your system makes consequential decisions about Colorado residents, yes. Where your headquarters sits doesn't exempt you.
When does this take effect?
As enacted in May 2024 (SB 24-205), the Act takes effect February 1, 2026. The legislature has since considered pushing that date back, so confirm the current timeline before you plan.
What if we already follow NIST AI RMF?
You're ahead. Aligning with a recognized framework supports the Act's rebuttable presumption of reasonable care — but you still need the Colorado-specific pieces: consumer notices, impact assessments, and an appeal mechanism.
Want a hand getting this right?
A 30-minute conversation often saves weeks of guessing. We'll talk through your team, your data, and what to do first — no slide deck required.