The Colorado AI Act: Implications for Health Care Providers
Friday, February 7, 2025

Artificial intelligence (AI) is increasingly being integrated into health care operations, from administrative functions such as scheduling and billing to clinical decision-making, including diagnosis and treatment recommendations. Although AI offers significant benefits, concerns regarding bias, transparency, and accountability have prompted regulatory responses. Colorado’s Artificial Intelligence Act (the Act), set to take effect on February 1, 2026, imposes governance and disclosure requirements on entities deploying high-risk AI systems, particularly those involved in consequential decisions affecting health care services and other critical areas.

Given the Act’s broad applicability, including its potential extraterritorial reach for entities conducting business in Colorado, health care providers must proactively assess their AI utilization and prepare for compliance with forthcoming regulations. Below, we discuss the intent of the Act, what types of AI it applies to, future regulation, potential impact on providers, statutory compliance requirements, and enforcement mechanisms.

1. What Is the Act Trying to Protect Against?

The Act primarily seeks to mitigate algorithmic discrimination, defined as AI-driven decision-making that results in unlawful differential treatment or disparate impact on individuals based on certain characteristics, such as race, disability, age, or language proficiency. The Act seeks to prevent AI from reinforcing existing biases or making decisions that unfairly disadvantage particular groups.

Examples of Algorithmic Discrimination in Health Care

  • Access to Care Issues: AI-powered phone scheduling systems may fail to recognize certain accents or accurately process non-English speakers, making it more difficult for non-native English speakers to schedule medical appointments.
  • Biased Diagnostic Tools and Treatment Recommendations: Some AI diagnostic tools may recommend different treatments for patients of different ethnicities, not because of medical evidence but due to biases in the training data. For instance, an AI model trained primarily on data from white patients might miss early signs of disease that present differently in Black or Hispanic patients, resulting in inaccurate or less effective treatment recommendations for historically marginalized populations.

By targeting these and other AI-driven inequities, the Act aims to ensure automated systems do not reinforce or exacerbate existing disparities in health care access and outcomes.

2. What Types of AI Are Addressed by the Act?

The Act applies broadly to businesses using AI to interact with or make decisions about Colorado residents. Although certain high-risk AI systems — those that are a substantial factor in making consequential decisions — are subject to more stringent requirements, the Act imposes obligations on most AI systems used in health care.

Key Definitions in the Act

  • “Artificial Intelligence System” means any machine-based system that generates outputs — such as decisions, predictions, or recommendations — that can influence real-world environments.
  • “Consequential Decision” means a decision that materially affects a consumer’s access to or cost of health care, insurance, or other essential services.
  • “High-Risk AI System” means any AI tool that makes or substantially influences a consequential decision.
  • “Substantial Factor” means a factor that assists in making a consequential decision or is capable of altering the outcome of a consequential decision and is generated by an AI system.
  • “Developers” means creators of AI systems.
  • “Deployers” means users of high-risk AI systems.

3. How Can Health Care Providers Ensure Compliance?

Although the Act sets out broad obligations, specific regulations are still forthcoming. The Colorado Attorney General has been tasked with developing rules to clarify compliance requirements. These regulations may address:

  • Risk management and compliance frameworks for AI systems.
  • Disclosure requirements for AI usage in consumer-facing applications.
  • Guidance on evaluating and mitigating algorithmic discrimination.

Health care providers should monitor developments as the regulatory framework evolves to ensure their AI-related practices align with state law.

4. How Could the Act Impact Health Care Operations?

The Act will require health care providers to specifically evaluate how they use AI across various operational areas, as the Act applies broadly to any AI system that influences decision-making. Given AI’s growing role in patient care, administrative functions, and financial operations, health care organizations should anticipate compliance obligations in multiple domains.

Billing and Collections

  • AI-driven billing and claims processing systems should be reviewed for potential biases that could disproportionately target specific patient demographics for debt collection efforts.
  • Deployers should ensure that their AI systems do not inadvertently create financial barriers for specific patient groups.

Scheduling and Patient Access

  • AI-powered scheduling assistants must be designed to accommodate patients with disabilities and limited English proficiency to prevent inadvertent discrimination and delayed access to care.
  • Providers must evaluate whether their AI tools prioritize certain patients over others in a way that could be deemed discriminatory.

Clinical Decision-Making and Diagnosis

  • AI diagnostic tools must be validated to ensure they do not produce biased outcomes for different demographic groups.
  • Health care organizations using AI-assisted triage tools should establish protocols for reviewing AI-generated recommendations to ensure fairness and accuracy.

5. If You Use AI, What Do You Need to Comply With?

The Act establishes different obligations for Developers and Deployers. Health care providers will in most cases be “Deployers” of AI systems as opposed to Developers. Health care providers will want to scrutinize contractual relationships with Developers for appropriate risk allocation and information sharing as providers implement AI tools into their operations.

  • Obligations of Developers (AI Vendors)
    • Disclosures to Deployers: Developers must provide transparency about the AI system’s training data, known biases, and intended use cases.
    • Risk Mitigation: Developers must document efforts to minimize algorithmic discrimination.
    • Impact Assessments: Developers must evaluate whether the AI system poses risks of discrimination before deploying it.
  • Obligations of Deployers (e.g., Health Care Providers)
    • Duty to Avoid Algorithmic Discrimination
      • Deployers of high-risk AI systems must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
    • Risk Management Policy & Program
      • Deployers must implement a risk management policy and program that identifies, documents, and mitigates risks of algorithmic discrimination.
      • The program must be iterative, regularly updated, and aligned with recognized AI risk management frameworks.
      • Requirements vary based on the deployer’s size, complexity, AI system scope, and data sensitivity.
    • Impact Assessments (Regular & Event-Triggered Reviews)
      • Timing Requirements: Deployers must conduct impact assessments:
        • Before deploying any high-risk AI system.
        • At least annually for each deployed high-risk AI system.
        • Within 90 days after any intentional and substantial modification to the AI system.
      • Required Content: Each impact assessment must include the AI system’s purpose, intended use, and benefits, an analysis of risks of algorithmic discrimination and mitigation measures, a description of data processed (inputs, outputs, and any customization data), performance metrics and system limitations, transparency measures (including consumer disclosures), and details on post-deployment monitoring and safeguards.
      • Special Requirements for Modifications: If an impact assessment is conducted due to a substantial modification, it must also include an explanation of how the AI system’s actual use aligned with or deviated from its originally intended purpose.
    • Notifications & Transparency
      • Public Notice: Deployers must publish a statement on their website describing the high-risk AI systems they use and how they manage discrimination risks.
      • Notices to Patients/Employees: Before an AI system makes a consequential decision, individuals must be notified of its use.
      • Post-Decision Explanation: If AI contributes to an adverse decision, deployers must explain its role and allow the individual to appeal or correct inaccurate data.
      • Attorney General Notifications: If AI is found to have caused algorithmic discrimination, deployers must notify the Attorney General within 90 days.

Small deployers (those with fewer than 50 employees) who do not train AI models with their own data are exempt from many of these compliance obligations.

6. How Is the Act Enforced?

  • Only the Colorado Attorney General has enforcement authority.
  • A rebuttable presumption of compliance exists if Deployers follow recognized AI risk management frameworks.
  • There is no private right of action, meaning consumers cannot sue directly under the Act.

Health care providers should take early action to assess their AI usage and implement compliance measures.

Final Thoughts: What Health Care Providers Should Do Now

  • The Act represents a significant shift in AI regulation, particularly for health care providers who increasingly rely on AI-driven tools for patient care, administrative functions, and financial operations.
  • Although the Act aims to enhance transparency and mitigate algorithmic discrimination, it also imposes substantial compliance obligations. Health care organizations will have to assess their AI usage, implement risk management protocols, and maintain detailed documentation.
  • Given the evolving regulatory landscape, health care providers should take a proactive approach by auditing existing AI systems, training staff on compliance requirements, and establishing governance frameworks that align with best practices. As rulemaking by the Colorado Attorney General progresses, staying informed about additional regulatory requirements will be critical to ensuring compliance and avoiding enforcement risks.
  • Ultimately, the Act reflects a broader trend toward AI regulation that is likely to extend beyond state borders. Health care organizations that invest in AI governance now will not only mitigate legal risks but also maintain patient trust in an increasingly AI-driven industry.
  • If health care providers plan to integrate AI systems into their operations, conducting a thorough legal analysis is essential to determine whether the Act applies to their specific use cases. This should also include careful review and negotiation of service agreements with AI Developers to ensure that the provider has sufficient information and cooperation from the Developer to comply with the Act and to properly allocate risk between the parties.

Compliance is not a one-size-fits-all process. It requires careful evaluation of AI tools, their functions, and their potential to influence consequential decisions. Organizations should work closely with legal counsel to navigate the Act’s complexities, implement risk management frameworks, and establish protocols for ongoing compliance. As AI regulations evolve, proactive legal assessment will be crucial to ensuring that health care providers not only meet regulatory requirements but also uphold ethical and equitable AI practices that align with broader industry standards.
