Colorado’s Artificial Intelligence Act (CAIA) Updates: A Summary of CAIA’s Consumer Protections When Interacting with Artificial Intelligence Systems
Monday, May 12, 2025

During the 2024 legislative session, the Colorado General Assembly passed Senate Bill 24-205, known as the Colorado Artificial Intelligence Act (CAIA). The law takes effect on February 1, 2026, and requires developers and deployers of high-risk AI systems to protect Colorado residents ("consumers") from the risks of algorithmic discrimination. Notably, the Act also requires developers and deployers to disclose to consumers that they are interacting with an AI system. Colorado Gov. Jared Polis, however, expressed concerns in 2024 and expected legislators to refine key definitions and update the compliance structure before the February 2026 effective date.

As Colorado moves toward implementation, the Colorado AI Impact Task Force issued recommendations for updates in its February 1, 2025 Report. These recommended updates, along with a description of the Act, are covered below.

Background

A “high-risk” AI system is defined to include any machine-based system that infers outputs from data inputs and has a material legal or similarly significant effect on the provision, denial, cost, or terms of a product or service. The statute identifies sectors that involve consequential decisions, such as decisions related to healthcare, employment, financial or credit services, housing, insurance, or legal services. CAIA also contains numerous carve-outs for technologies that perform narrow tasks or certain functions, such as cybersecurity, data storage, and chatbots.

Beyond covered use cases, CAIA imposes on developers of AI systems a duty to prevent algorithmic discrimination and to protect consumers from any known or foreseeable risks arising from use of the AI system. A developer is a person doing business in Colorado that develops, or intentionally and substantially modifies, an AI system. Among other things, a developer must make documentation available describing the intended uses and potential harmful uses of the high-risk AI system.

Similarly, CAIA regulates a person doing business in Colorado that deploys a high-risk AI system used by Colorado residents (a “deployer”). Deployers face stricter obligations and must inform consumers when AI is involved in a consequential decision. The Act requires deployers to implement a risk management policy and program to govern the use of the AI system. Further, deployers must report any identified algorithmic discrimination to the Colorado Attorney General’s Office within 90 days and must allow consumers to appeal AI-based decisions or request human review of those decisions when possible.

Data Privacy and Consumer Rights

Consumers have the right to opt out of data processing related to AI-based decisions and may appeal any AI-based decision. This opt-out may affect further automated decision-making involving the Colorado resident and the processing of that consumer’s personal data for profiling. The deployer must also disclose to the consumer when a high-risk AI system was used in a decision-making process that resulted in a decision adverse to the consumer.

Exemptions

The CAIA contains various exemptions, including for entities operating under other regulatory regimes (e.g., insurers, banks, and HIPAA-covered entities) and for the use of certain approved technologies (e.g., technology cleared, approved, or certified by a federal agency, such as the FAA or FDA). There are caveats, however. For example, HIPAA-covered entities are exempt only to the extent they provide healthcare recommendations that are generated by an AI system, require a healthcare provider to take action to implement the recommendation, and are not considered “high risk.” Small businesses are exempt to the extent they employ fewer than 50 full-time employees and do not train the system with their own data. Thus, deployers should closely analyze the available exemptions to ensure their activities fall squarely within an exemption’s scope.

Updates

The recent Colorado AI Impact Task Force Report encourages additional changes to CAIA before enforcement begins in February 2026. The current concerns center on ambiguities in the Act, compliance burdens, and issues raised by various stakeholders. The Governor is concerned with whether the guardrails will inhibit innovation and AI progress in the state.

The Colorado AI Impact Task Force notes that there is consensus to refine documentation and notification requirements. However, there is less consensus on how to adjust the definition of “consequential decisions.” Reworking the exemptions to the definition of covered systems is also a change desired by both industry and the public. 

Other potential changes to the CAIA depend on how interconnected sections are revised. For example, changes to the definition of “algorithmic discrimination” depend on related issues concerning the obligations of developers and deployers to prevent algorithmic discrimination and on the corresponding enforcement provisions. Similarly, the intervals for impact assessments may be greatly affected by changes to the definition of an “intentional and substantial modification” to a high-risk AI system. Further, impact assessments are intertwined with deployers’ risk management programs, so proposed changes to either will likely affect the other.

Lastly, there remains firm disagreement on amendments related to several definitions. “Substantial factor” is one debated definition that will require a creative approach to delineate the scope of AI technologies subject to the CAIA. Similarly, the “duty of care” imposed on developers and deployers is hotly contested, including whether to remove the concept or replace it with more stringent obligations. Other debated topics subject to change include the small-business exemption, the opportunity to cure incidents of non-compliance, trade secret exemptions, the consumer right to appeal, and the scope of attorney general rulemaking.

Guidance

Given that most stakeholders recognize changes are needed, any business impacted by the CAIA should continue to watch developments in the legislative process for amendments that could drastically impact the scope and requirements of the Act.

Takeaways

Businesses should assess whether they, or their vendors, use any AI system that could be considered high risk under the CAIA. Some recommendations include:

  • Assess AI usage and consider whether that use falls within the scope of the CAIA, including whether any exemptions are available
  • Conduct an AI risk assessment consistent with the Colorado AI Act
  • Develop an AI compliance plan that is consistent with the CAIA consumer protections regarding notification and appeal processes
  • Continue to monitor the updates to the CAIA
  • Evaluate contracts with AI vendors to ensure that necessary documentation is provided by the developer or deployer

Colorado has taken the lead as one of the first states in the nation to enact sweeping AI laws. Other states will likely look to the progress of Colorado and enact similar legislation or make improvements where needed. Therefore, watching the CAIA and its implementation is of great importance in the burgeoning field of consumer-focused AI systems that impact consequential decisions in the consumer’s healthcare, financial well-being, education, housing, or employment.
