The clock is running. Colorado SB 24-205 — the Colorado Artificial Intelligence Act — takes effect June 30, 2026, less than 90 days away. It is the most operationally detailed state AI law yet enacted in the United States, and unlike the wave of federal proposals circulating in Washington, it is already on the books. There is no federal preemption in effect, no court stay, and no small business exemption.
For small teams deploying AI in any consequential decision context, this is the most urgent near-term compliance deadline in the US. Here is what the law requires and what you need to have ready.
Key Takeaways
- The Colorado AI Act enforcement date is June 30, 2026 — under 90 days away. There is no extension and no federal override currently in effect.
- The law applies to both developers (who build or substantially modify high-risk AI) and deployers (who use high-risk AI in consequential decisions affecting Colorado residents).
- Covered domains: employment, education, financial services, essential services, healthcare, housing, and legal services. If your AI touches any of these, it may be in scope.
- Required by June 30: risk assessment, bias disclosure, transparency statement, and a human review mechanism for affected individuals.
- Violations are deceptive trade practices — up to $20,000 per violation. The AG may offer a 60-day cure window for good-faith efforts.
- The federal AI preemption proposals currently circulating do not override Colorado — compliance is mandatory today.
Summary
The Colorado AI Act is the first US state law with a comprehensive risk-based framework modeled loosely on the EU AI Act. Unlike simpler disclosure-only laws, it requires substantive governance infrastructure: documented risk assessments, active bias monitoring, public transparency statements, and individual appeal mechanisms. For small teams that have deployed AI in HR screening, loan decisions, healthcare triage, or essential services without building corresponding governance, June 30 is a hard deadline that requires action now — not in Q3.
What the Law Actually Requires
Who is covered:
The law applies to two categories of organization:
- Developers: entities that develop, substantially modify, or make available to the public a high-risk AI system. "Substantially modify" means changes that affect the AI's consequential decision-making behavior.
- Deployers: entities that deploy a high-risk AI system in Colorado for consequential decisions affecting Colorado residents.
Both categories have distinct obligations. Using a vendor's AI system makes you a deployer. Building your own makes you a developer. Modifying a vendor's AI for your use case may make you both.
What is a high-risk AI system:
A system that:
- Makes or materially contributes to a consequential decision — a decision that has a material legal or similarly significant effect on an individual's access to, or the cost, terms, or availability of, a covered service
- In a covered domain: employment, education, financial services, essential services (food, shelter, transportation, utilities), government services, healthcare, housing, or legal services
- And presents a material risk of algorithmic discrimination based on a protected characteristic
Not every AI system is high-risk under this definition. A general-purpose productivity tool that does not make consequential decisions in covered domains is not covered. The key question is: does this system make or materially influence decisions that could meaningfully harm individuals in ways that correlate with protected characteristics?
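To make the test concrete, here is a minimal sketch of how a team might encode the classification as a repeatable step in an AI inventory. The class, field names, and domain constants are illustrative assumptions, not statutory language:

```python
from dataclasses import dataclass

# Covered domains listed in the Act (constant names are illustrative).
COVERED_DOMAINS = {
    "employment", "education", "financial_services", "essential_services",
    "government_services", "healthcare", "housing", "legal_services",
}

@dataclass
class AISystem:
    name: str
    domain: str                   # where the system operates
    consequential_decision: bool  # makes or materially contributes to one
    discrimination_risk: bool     # material risk of algorithmic discrimination

def is_high_risk(system: AISystem) -> bool:
    """Apply the three-part high-risk test summarized above."""
    return (
        system.domain in COVERED_DOMAINS
        and system.consequential_decision
        and system.discrimination_risk
    )

# Example: a resume-screening tool used in hiring.
screener = AISystem("resume-screener", "employment", True, True)
assert is_high_risk(screener)
```

Encoding the test this way forces the classification rationale to be written down per system, which is exactly the documentation the law expects.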
Developer obligations:
- Make available to deployers: documentation of the system's known limitations, the data used to develop it, an explanation of its decisions, and instructions for appropriate use
- Disclose known risks of algorithmic discrimination
- Report known instances of algorithmic discrimination to the Colorado AG within 90 days of discovery
Deployer obligations:
- Implement a risk management policy governing the use of the high-risk AI system
- Conduct and document a pre-deployment impact assessment
- Complete annual post-deployment assessments of the system's performance, including bias monitoring
- Publish a transparency statement on the deployer's website disclosing the types of high-risk AI deployed, their purposes, and how individuals can seek human review
- Provide individual notice to any person subject to a consequential decision influenced by high-risk AI, including the principal reason for the decision
- Offer a human review mechanism: a meaningful process through which individuals can appeal or request reconsideration of AI-influenced decisions
What "Material Risk of Algorithmic Discrimination" Means in Practice
The law uses "material risk of algorithmic discrimination" as the threshold for coverage. This does not mean your system has been shown to discriminate — it means it could, based on the context in which it is deployed.
The Colorado AG's guidance treats several deployment contexts as carrying inherent material risk:
- AI systems trained on historical data where the population was historically underrepresented or discriminated against
- AI systems using proxy variables that correlate with protected characteristics (credit score proxies, zip code, certain educational credentials)
- AI systems whose training data was not audited for representation bias
If you cannot positively rule out material risk of algorithmic discrimination, treat the system as high-risk and comply accordingly. The documentation burden of demonstrating no material risk is typically higher than completing the impact assessment required for high-risk systems.
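One way to operationalize the proxy-variable heuristic is to scan a model's input features against a maintained list of known proxies before classifying the system. A minimal sketch; the proxy list below is illustrative and deliberately incomplete, and your legal review should own the real one:

```python
# Illustrative, non-exhaustive set of features that commonly proxy for
# protected characteristics. Legal review should define the actual list.
KNOWN_PROXY_FEATURES = {"zip_code", "credit_score", "school_attended",
                        "first_name", "neighborhood"}

def proxy_risk_flags(model_features: set[str]) -> set[str]:
    """Return the subset of input features that match known proxy variables."""
    return model_features & KNOWN_PROXY_FEATURES

flags = proxy_risk_flags({"years_experience", "zip_code", "skills_score"})
if flags:
    # Any hit means you likely cannot rule out material risk.
    print(f"Proxy features present: {flags} -> classify as high-risk")
```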
Why Small Teams Are Particularly Exposed
Three patterns create outsized risk for small organizations:
Third-party AI deployed without oversight documentation. A startup uses a third-party AI screening tool to filter job applications. The vendor's documentation is a one-page API description. There is no impact assessment, no transparency statement, and no process to explain to a rejected applicant why their application was filtered. This is a textbook Colorado AI Act violation — as a deployer, the startup owns the compliance obligation regardless of who built the tool. AI features hidden inside third-party tools and deployed without governance are where most small-team violations originate.
No human review mechanism. Many small teams use AI to make fast decisions at scale precisely because they do not have the staffing to review each decision manually. The Colorado AI Act does not require reviewing every decision — it requires a mechanism for individuals to request human review of decisions that affect them. This is a process requirement, not a staffing requirement. It can be as simple as a contact email, a documented review workflow, and a committed response time (a minimal intake sketch follows the third pattern below).
Transparency statement not drafted. The law requires a public transparency statement before deployment, not when an investigation begins. A team that has been using a high-risk AI since 2024 and has never published such a statement faces retroactive exposure for every consequential decision made since then.
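A human review mechanism of the kind described above can be tracked with very little machinery. A minimal sketch of an intake record with a committed response deadline; the field names and the 30-day response commitment are illustrative assumptions, not statutory requirements:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

RESPONSE_SLA_DAYS = 30  # illustrative commitment, not a statutory number

@dataclass
class ReviewRequest:
    requester_email: str
    decision_id: str             # ties the request to the AI-influenced decision
    received: date = field(default_factory=date.today)
    reviewer: str | None = None  # named human who will reconsider the decision
    outcome: str | None = None   # e.g. "upheld" or "reversed", documented

    @property
    def due(self) -> date:
        return self.received + timedelta(days=RESPONSE_SLA_DAYS)

# Example: an applicant asks for human review of an automated rejection.
req = ReviewRequest("applicant@example.com", "decision-2026-0412")
print(f"Assign a reviewer and respond by {req.due}")
```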
Governance Goals for June 30
For a small team to be defensibly compliant by the deadline:
- AI system inventory completed and classified: every AI system assessed against the high-risk definition; classification rationale documented
- Impact assessments completed: for every system classified as high-risk, an impact assessment completed and documented (retroactively, if the system is already in production)
- Risk management policy in place: a written policy governing how high-risk AI is used, monitored, and reviewed
- Transparency statement published: live on the website before June 30
- Individual notice process defined: a documented process for notifying individuals of AI-influenced consequential decisions
- Human review mechanism operational: a real, functioning process — not just a policy document — through which individuals can request reconsideration
Controls: What to Actually Do
This week:
- Map every AI system your organization uses. Flag any that operate in the covered domains: employment, financial services, healthcare, housing, education, essential services, legal services.
- For each flagged system, apply the two-question test: (1) Does it make or materially contribute to a consequential decision? (2) Is there a material risk of algorithmic discrimination? If both answers are yes, it is high-risk.
- Check your vendor contracts. Do they provide the developer documentation the law requires deployers to obtain? A completeness-check sketch follows this list.
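To track vendor responses against the developer obligations listed earlier, a simple completeness check works. A minimal sketch; the item names paraphrase the developer documentation requirements summarized above:

```python
# Items a deployer should be able to obtain from the developer, per the
# developer obligations summarized earlier in this piece.
REQUIRED_VENDOR_DOCS = {
    "known_limitations",
    "training_data_summary",
    "decision_explanation",
    "appropriate_use_instructions",
    "known_discrimination_risks",
}

def missing_vendor_docs(provided: set[str]) -> set[str]:
    """Return required documentation items the vendor has not supplied."""
    return REQUIRED_VENDOR_DOCS - provided

gaps = missing_vendor_docs({"known_limitations", "decision_explanation"})
if gaps:
    print(f"Escalate to legal; vendor package missing: {sorted(gaps)}")
```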
This month:
- Complete a risk assessment for each high-risk AI system — use the AI governance checklist as a starting framework. Document methodology, scope, findings, and any mitigation decisions made.
- Draft the individual notice language you would use if a person asked why an AI-influenced decision was made about them (a template sketch follows this list).
- Write a risk management policy. At minimum it should cover: who owns oversight of each high-risk AI, how performance is monitored, what triggers escalation or review, and how bias concerns are handled.
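For the individual notice, a small template function makes the "principal reason" requirement concrete. The wording below is a hypothetical starting draft, not vetted legal language; have counsel review before use:

```python
def draft_individual_notice(decision: str, principal_reason: str,
                            review_contact: str) -> str:
    """Assemble a draft notice; counsel should review before any use."""
    return (
        f"An AI system contributed to the following decision: {decision}. "
        f"The principal reason for this decision was: {principal_reason}. "
        f"You may request human review of this decision by contacting "
        f"{review_contact}."
    )

print(draft_individual_notice(
    "your rental application was declined",
    "reported income below the required income-to-rent ratio",
    "reviews@example.com",
))
```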
Before June 30:
- Publish the transparency statement to your website.
- Stand up the human review mechanism and document that it is operational.
- Complete post-deployment bias monitoring for any high-risk AI that has been in production; a screening sketch follows this list. If you find a discrimination issue, the 90-day AG disclosure clock starts when you discover it — not when the AG does.
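For the bias monitoring step, one common screening method is comparing favorable-outcome rates across groups, using the four-fifths rule from US employment practice as a flag threshold. The Act does not mandate this particular test; this is a minimal sketch assuming you log each decision with a group label:

```python
def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of favorable outcomes per group from (group, favorable) logs."""
    totals: dict[str, int] = {}
    favorable: dict[str, int] = {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + (1 if ok else 0)
    return {g: favorable[g] / totals[g] for g in totals}

def impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate over highest; < 0.8 is the common screening flag."""
    return min(rates.values()) / max(rates.values())

logs = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
     + [("group_b", True)] * 35 + [("group_b", False)] * 65
rates = selection_rates(logs)
if impact_ratio(rates) < 0.8:
    print(f"Selection rates {rates} fail the four-fifths screen; investigate.")
```

A ratio below 0.8 is a screening flag, not a legal finding: it means investigate, document the investigation, and involve counsel.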
Checklist (Copy/Paste)
- Inventory all AI systems; classify each against the high-risk definition
- Document classification rationale for each system
- Obtain developer documentation from vendors for all third-party high-risk AI
- Complete pre-deployment impact assessments (retroactively if already deployed)
- Conduct bias monitoring and document findings
- Write risk management policy covering oversight, monitoring, escalation, review
- Draft individual notice language for AI-influenced decisions
- Implement and test the human review mechanism
- Publish transparency statement to website before June 30
- Document the date of publication and content as evidence of compliance
- Establish annual post-deployment assessment cadence
Implementation Steps
- Days 1-3: Run an AI inventory session. Pull every tool from expense systems, IT asset registers, and engineering wikis. Flag anything used in hiring, lending, healthcare triage, housing decisions, benefits eligibility, or access to essential services.
- Week 1: Apply the high-risk classification test to each flagged tool. Document the analysis. If uncertain, classify as high-risk — the cure period protects good-faith efforts.
- Week 2: Request developer documentation from vendors. Track responses. A vendor that cannot provide impact assessment support may create deployer-level risk you need to escalate to legal.
- Week 2-3: Complete impact assessments for each high-risk system. Use the AI risk assessment framework as your template.
- Week 3-4: Write the transparency statement and risk management policy. Have legal review before publishing.
- Before June 30: Publish the transparency statement. Stand up the human review mechanism. Run one end-to-end test of the human review process and document it.
- Ongoing: Schedule annual post-deployment assessments for each high-risk AI. Assign a named owner for each; a simple tracking sketch follows.
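To keep the annual cadence from slipping, track each high-risk system with a named owner and a computed next-assessment date. A minimal sketch; the record fields are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class GovernanceRecord:
    system_name: str
    owner: str           # the named individual accountable for the system
    last_assessed: date

    @property
    def next_assessment_due(self) -> date:
        # Annual post-deployment assessment cadence for high-risk AI.
        return self.last_assessed + timedelta(days=365)

records = [
    GovernanceRecord("resume-screener", "j.doe", date(2026, 6, 15)),
]
overdue = [r for r in records if r.next_assessment_due < date.today()]
for r in overdue:
    print(f"{r.system_name}: annual assessment overdue, owner {r.owner}")
```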
Frequently Asked Questions
Q: We are a SaaS company and our AI product is used by other businesses to make consequential decisions. Are we a developer or a deployer? A: You are a developer. Your customers are the deployers. You have developer obligations: you must provide documentation to your deployers (limitations, training data summary, decision explanation capability) and disclose known risks of algorithmic discrimination. If you substantially modify the system based on customer configurations, you may share deployer obligations too.
Q: Our AI makes recommendations but a human always makes the final decision. Are we still covered? A: Possibly. The law covers AI that "materially contributes" to a consequential decision — not just AI that makes the decision autonomously. If the human decision-maker routinely accepts the AI recommendation without independent analysis, regulators are likely to treat it as a material contribution.
Q: We are not headquartered in Colorado but serve Colorado customers. Does the law apply? A: Yes. Jurisdiction is based on where affected individuals are located, not where the company is based. If Colorado residents are subject to consequential decisions made using your AI, you are covered.
Q: What should we do if we discover our AI has been discriminating? A: The law requires disclosure to the Colorado AG within 90 days of discovery. Stop using the model in production, document the discovery and investigation, implement remediation, and engage legal counsel before disclosure. The AG's enforcement posture has explicitly acknowledged good-faith remediation as a mitigating factor.
Q: Can the White House preemption framework invalidate the Colorado AI Act? A: Not today. The White House framework is a set of legislative recommendations to Congress — it has no legal effect on the Colorado AI Act until Congress passes and the President signs a preemption statute. That has not happened, and the timeline is uncertain. Comply with Colorado now.
References
- Colorado AI Act full text — SB 24-205: https://leg.colorado.gov/bills/sb24-205
- Colorado's Landmark AI Law Coming Online (Brownstein Hyatt): https://www.bhfs.com/insight/colorados-landmark-ai-law-coming-online-what-developers-and-deployers-should-know/
- Complete Guide to the Colorado AI Act 2026 (Glacis): https://www.glacis.io/guide-colorado-ai-act
- Navigating the AI Employment Landscape in 2026 (K&L Gates): https://www.klgates.com/Navigating-the-AI-Employment-Landscape-in-2026-Considerations-and-Best-Practices-for-Employers-2-2-2026
- NIST AI Risk Management Framework 1.0: https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf
