Colorado SB 24-205: On the Verge of Addressing AI Risk with Sweeping Consumer Protection Law
Friday, May 17, 2024

On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205—concerning consumer protections in interactions with artificial intelligence systems—after the Senate passed the bill on May 3. The law adds a new part 17, “Artificial Intelligence,” to Article 1, Title 6, of the Colorado Consumer Protection Act, to take effect on February 1, 2026. This makes Colorado “among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale,” Polis said in a letter to the Colorado General Assembly.

The new law aims to prevent algorithmic discrimination in “consequential decisions”—including decisions with a material legal or similarly significant effect on the provision or denial, or the cost or terms, of health care services and on employment decision-making.

The governor signed “with reservations,” however, urging sponsors, stakeholders, industry leaders, and others to “fine tune” the measure over the next two years to sufficiently protect technology, competition, and innovation in the state. Polis also suggested that “protecting consumers from discrimination and other unintended consequences of nascent AI technologies” may be better left to the federal government. “Laws that seek to prevent discrimination generally focus on prohibiting intentional discriminatory conduct,” Polis stated. “Notably, this bill deviates from that practice by regulating the results of AI system use, regardless of intent, and I encourage the legislature to reexamine this concept as the law is finalized before it takes effect in 2026.”

Similar to the Colorado law, Connecticut’s SB 2 also would have required developers and deployers of high-risk AI to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. The Connecticut bill passed the Senate but remained pending in the House when that legislative session ended.

Although AI innovation has outpaced regulation, state lawmakers are clearly aiming to catch up to protect consumers. The Colorado law is the first sweeping AI law enacted at the state level, taking into account AI governance principles related to reliability/safety, bias/non-discrimination, transparency, explainability, and accountability.

The following sets forth key provisions of the Colorado bill:

1. Duties of Developers

With limited exceptions, a developer of a high-risk AI system shall make available to the deployer or other developer of the high-risk AI system:

  • A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system;
  • Documentation disclosing, among other things (the following list is not exhaustive; a sketch of such a documentation package follows the list):
    • high-level summaries of the type of data used to train the high-risk AI system;
    • known or reasonably foreseeable limitations of the high-risk AI system, including known or reasonably foreseeable risks of algorithmic discrimination arising from intended uses;
    • the purpose of the high-risk AI system;
    • the intended benefits and uses of the high-risk AI system;
    • how the high-risk AI system was evaluated for performance and mitigation of algorithmic discrimination;
    • data governance measures, including screening for possible biases and appropriate mitigation;
    • measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination;
    • how the high-risk AI system should be used, not used, and monitored when used to make, or is a substantial factor in making, a consequential decision; and
    • any additional documentation reasonably necessary to assist the deployer in understanding outputs/monitoring performance of the high-risk AI for risks of algorithmic discrimination.
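
To make the documentation package concrete, the minimal sketch below models it as a Python data structure. This is an illustration only, assuming hypothetical class and field names; the law does not prescribe any particular format.

    from dataclasses import dataclass, field

    # Hypothetical sketch: names are illustrative, not statutory text.
    @dataclass
    class DeveloperDocumentation:
        training_data_summary: str            # high-level summary of training data types
        known_limitations: list[str]          # incl. foreseeable algorithmic-discrimination risks
        system_purpose: str
        intended_benefits_and_uses: str
        evaluation_summary: str               # performance and discrimination-mitigation evaluation
        data_governance_measures: list[str]   # bias screening and mitigation steps
        discrimination_mitigations: list[str]
        usage_and_monitoring_guidance: str    # how the system should (and should not) be used
        supplemental_materials: list[str] = field(default_factory=list)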

Additional Documentation and Information. Developers will be required to make available to a deployer or other developer the documentation and information necessary for a deployer to complete its required impact assessments (see Duties of Deployers, below). However, a developer that also serves as a deployer of a high-risk AI system is not required to generate this documentation unless the high-risk AI system is provided to an unaffiliated entity acting as a deployer.

Websites. A developer will be required to make available, in a manner that is clear and readily available on the developer’s website or in a public use case inventory, a statement summarizing:

  • the types of high-risk AI systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and
  • how the developer manages known or reasonably foreseeable risks of algorithmic discrimination.

The statement must be updated as necessary to ensure it remains accurate, and no later than 90 days after the developer intentionally and substantially modifies any high-risk AI system.

Notice to Attorney General/Developers/Deployers. A developer of a high-risk AI system must disclose to the attorney general, and to all known deployers or other developers of the system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the system no later than 90 days after the date on which:

  • the developer discovers through testing/analysis that the high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination; or
  • the developer receives from a deployer a credible report that the high-risk AI system has been deployed and has caused algorithmic discrimination.
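
As an illustration of this timing requirement, the sketch below computes the outside date for the required notice; the function name and usage are hypothetical.

    from datetime import date, timedelta

    def ag_disclosure_deadline(trigger_date: date) -> date:
        # trigger_date: the date the developer discovered the issue through
        # testing/analysis, or received a credible report from a deployer.
        return trigger_date + timedelta(days=90)

    # Example: a credible report received June 1, 2026 must be disclosed
    # to the attorney general and known deployers by August 30, 2026.
    print(ag_disclosure_deadline(date(2026, 6, 1)))  # 2026-08-30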

2. Duties of Deployers

Risk Management Policy/Program. A deployer of a high-risk AI system shall implement a risk management policy and program to govern its deployment of the system (a sketch follows the list below). The policy and program must:

  • specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination;
  • be planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk AI system; and
  • be reasonable considering the following:
    • the guidance and standards set forth in the latest version of the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology, another nationally or internationally recognized risk management framework for AI systems (if the standards are substantially equivalent to, or more stringent than, the requirements of this part), or any risk management framework for AI systems that the attorney general may designate;
    • the size and complexity of the deployer;
    • the nature and scope of the high-risk AI systems deployed, including their intended uses; and
    • the sensitivity and volume of data processed in connection with the high-risk AI systems deployed by the deployer.

The risk management policy and program may cover multiple high-risk AI systems deployed by the deployer.
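
One way a deployer might organize such a program is around the four functions of NIST’s AI Risk Management Framework (govern, map, measure, manage), which the bill names as a benchmark. The record below is a hypothetical sketch; every key and value is an illustrative assumption, not statutory language.

    # Hypothetical sketch of a deployer's risk management program record.
    risk_management_program = {
        "framework": "NIST AI RMF 1.0",
        "govern": {"owner": "AI governance committee", "policy_review": "quarterly"},
        "map": {"systems_in_scope": ["resume-screening model"], "intended_uses_documented": True},
        "measure": {"bias_metrics": ["demographic parity", "equalized odds"]},
        "manage": {"mitigations": ["threshold recalibration"], "escalation": "legal and compliance"},
        # Reasonableness factors drawn from the bill:
        "deployer_size_and_complexity": "1,200 employees; multi-state operations",
        "nature_and_scope_of_systems": "employment screening",
        "data_sensitivity_and_volume": "high (employment records)",
    }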

Impact Assessment. A deployer, or a third party contracted by the deployer, that deploys a high-risk AI system will be required to complete an impact assessment for the system, at least annually and within 90 days after any intentional and substantial modification to the system is made available (a scheduling sketch follows the list below). Each impact assessment must include, at a minimum:

  • a statement disclosing the purpose, intended use cases, deployment context of, and benefits afforded by, the high-risk AI system;
  • an analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination, and if so, the nature and the steps that have been taken to mitigate the risks;
  • a description of the categories of data the AI system processes as inputs and the outputs the system produces;
  • an overview of the categories of data the deployer used to customize the high-risk AI system;
  • metrics used and transparency measures taken concerning the high-risk AI system; and
  • a description of the post-deployment monitoring and user safeguards provided concerning the high-risk AI system, including the oversight, use, and learning process.
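
The cadence amounts to two overlapping clocks: an annual refresh and a 90-day clock that starts whenever an intentional and substantial modification is made available. A minimal sketch, assuming hypothetical function and parameter names:

    from datetime import date, timedelta

    def next_assessment_due(last_assessment: date,
                            last_substantial_modification: date | None = None) -> date:
        # The next assessment is due at the earlier of the annual refresh
        # and the 90-day post-modification deadline.
        annual_due = last_assessment + timedelta(days=365)
        if last_substantial_modification is not None:
            return min(annual_due, last_substantial_modification + timedelta(days=90))
        return annual_due

    # Example: an assessment completed Jan. 15, 2026, plus a substantial
    # modification made available June 1, 2026, is next due Aug. 30, 2026.
    print(next_assessment_due(date(2026, 1, 15), date(2026, 6, 1)))  # 2026-08-30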

Modifications. An impact assessment following an intentional and substantial modification to a high-risk AI system must include a statement disclosing the extent to which the high-risk AI system was used in a manner that was consistent with, or varied from, the developer’s intended uses.

Number of Assessments. A single impact assessment may address a comparable set of high-risk AI systems, and a reasonably similar impact assessment completed to comply with another law or regulation may suffice.

Review and Recordkeeping. The most recently completed impact assessment, and all prior impact assessments, must be maintained for at least three years following the final deployment. The deployer or a third-party contracted by the deployer must review the deployment of each high-risk AI system at least annually to ensure that it is not causing algorithmic discrimination.

Notification to Consumers. No later than the time of deployment of a high-risk AI system, the deployer must (see the notice sketch following this list):

  • notify the consumer that the deployer has deployed a high-risk AI system to make, or be a substantial factor in making, a consequential decision;
  • provide to the consumer a statement disclosing the purpose of the high-risk AI system and the nature of the consequential decision;
  • provide contact information for the deployer;
  • provide a plain-language description of the high-risk AI system, with instructions on how to access the deployer’s website statement (see Websites, below); and
  • provide to the consumer information regarding the consumer’s right to opt out of the processing of personal data concerning the consumer.
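
In practice, a deployer might assemble the notice as a structured record and render it in each language used in the ordinary course of business. The sketch below is hypothetical; the field names and values are illustrative assumptions, not statutory requirements.

    # Hypothetical consumer notice payload for a high-risk AI system.
    consumer_notice = {
        "high_risk_system_deployed": True,
        "purpose": "automated screening of employment applications",
        "nature_of_consequential_decision": "hiring decision",
        "deployer_contact": "compliance@example-deployer.com",
        "plain_language_description": "A statistical model scores applications against job criteria.",
        "website_statement_instructions": "See https://example-deployer.com/ai-statement",
        "opt_out_information": "How to opt out of processing of your personal data.",
    }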

A deployer that has made a consequential decision that is adverse to a consumer must also provide to the consumer:

  • a statement disclosing the principal reason or reasons for the consequential decision, including the degree to which the AI system contributed to the decision and the type and sources of the data processed;
  • an opportunity to correct any incorrect personal information that the AI system processed; and
  • an opportunity to appeal an adverse consequential decision.

The required notice, statement, contact information, and description generally must be provided directly to the consumer, in plain language, in all languages in which the deployer typically communicates in the ordinary course of business, and in a format that is accessible to consumers with disabilities.

Websites. Like developers, deployers also have specific requirements concerning statements on their websites concerning the types of AI systems currently deployed and how risks of algorithmic discrimination are managed, among other requirements.

Exceptions. The provisions concerning AI risk management policies and programs, impact assessments, and website notices do not apply to a deployer in the following limited circumstances:

  • the deployer employs fewer than 50 full-time employees and does not use the deployer’s own data to train the high-risk AI system;
  • the high-risk AI system is used for its intended uses and continues learning based on data derived from sources other than the deployer’s own data; and
  • the deployer makes available to consumers any impact assessment that the developer of the high-risk AI system has completed.

Discovery. Like developers, a deployer that subsequently discovers that a deployed high-risk AI system has caused algorithmic discrimination must, no later than 90 days after the date of the discovery, disclose the discovery to the attorney general.

Additional Disclosures to Consumers. A deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an AI system that is intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. Disclosure is not required, however, where it would be obvious to a reasonable person that the person is interacting with an AI system.

3. Rebuttable Presumptions/Affirmative Defenses

Developers are afforded a rebuttable presumption that they used reasonable care if they comply with specified provisions of the bill and with any additional requirements set forth in rules promulgated by the attorney general. Those provisions include disclosing to the attorney general, and to known deployers of the high-risk system, any known or reasonably foreseeable risk of algorithmic discrimination within 90 days after discovering, or receiving a credible report from a deployer, that the high-risk system has caused or is reasonably likely to have caused such discrimination.

Deployers are likewise afforded a rebuttable presumption if they disclose to the attorney general the discovery of algorithmic discrimination within 90 days after the discovery and comply with specified provisions of the bill. Developers and deployers alike have an affirmative defense, with respect to both high-risk and generative systems, if they have implemented and maintained a program in compliance with a nationally or internationally recognized risk management framework for AI systems that the bill or the attorney general designates, and they take specified measures to discover and correct violations of the bill.

Key Takeaways

Those developing and/or deploying AI should be aware that the Colorado measure is part of a multi-state push to prevent algorithmic discrimination in the use of AI—where the use of the system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of “age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification” protected under state or federal law.

As civil rights-oriented groups push this issue forward, and business and tech leaders push back, lawmakers in Connecticut and Colorado—but also in other states—have found themselves in the middle of a national debate as to whether AI measures like these prevent discrimination or stifle innovation. As the federal government works to create law in this space, the states will likely push these issues forward on a piecemeal basis.

In his May 8 remarks at an AI Expo for National Competitiveness, Senate Majority Leader Charles Schumer (D-NY) said he hopes the Health Committee, among others, will start holding hearings and drafting AI legislation in the summer and fall. Schumer is part of a Senate AI Working Group (“Working Group”) that issued a roadmap for “Driving U.S. Innovation in Artificial Intelligence” on May 15, in the hope of building momentum for new and ongoing consideration of bipartisan AI legislation. In a section on “High Impact Uses of AI,” the Working Group said that “existing laws, including [those] related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users.” Whether Congress is ready to legislate on AI outside of elections, however, remains to be seen.

“AI is unlike anything Congress has ever dealt with before,” Schumer said on May 8. “You know, if it’s health care or defense, we have a long track record. We know what’s happened. We know how we’ve interacted. But this is all brand new.”

Now that the Colorado law is signed, organizations should start to consider compliance issues, including policy development, engagement with AI auditors, contract language in AI vendor agreements to reflect responsibilities and coordination, and more. EBG will continue to update you on the latest in AI as developments occur. Please see our latest blogs on AI resume screening tools and federal anti-discrimination laws, as well as on the extension of the antidiscrimination provisions of the Affordable Care Act to patient care decision support tools, including algorithms. See also, for example, our materials on New York City’s Local Law 144 (on automated employment decision tools); San Francisco’s generative AI guidelines for city workers; insurance underwriting and pricing in New York State; and “Achieving Legal Compliance in AI: Minimizing Bias in Algorithms.”

Originally published on May 17, 2024, and updated on May 20, 2024, to reflect the bill’s signing into law.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.
