Colorado AI Systems Regulation: What Health Care Deployers and Developers Need to Know
Thursday, June 27, 2024

As the first state law to regulate the outcomes of Artificial Intelligence System (AI System) use, Colorado’s SB24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act), has generated plenty of cross-industry interest, and for good reason. Similar in some ways to the risk-based approach taken by the European Union (EU) in the EU AI Act, the Act regulates developers and deployers of AI Systems, which it defines as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The Act is scheduled to go into effect on February 1, 2026, and its scope is limited to activities in Colorado, entities doing business in Colorado, or entities whose activities affect Colorado residents. It generally focuses on regulating “high-risk” AI Systems, defined as any AI System that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “consequential decision” is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, among other services, health care services.

Deployer and Developer Requirements

Both developers and deployers of high-risk AI Systems must use reasonable care to protect consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.”[1] The Act also imposes specific obligations on developers of high-risk AI Systems, including disclosure of information to deployers; publication of summaries of the types of high-risk AI Systems the developer offers and how it manages any foreseeable risks; and disclosure to the Colorado Attorney General (AG) of “any known or reasonably foreseeable risks” of algorithmic discrimination arising from the intended uses of the high-risk AI System within 90 days of discovery. Deployers must implement risk management policies and programs to govern the deployment of high-risk AI Systems; complete impact assessments for those systems; notify consumers after deploying high-risk AI Systems to make, or be a substantial factor in making, consequential decisions concerning a consumer; and notify the AG within 90 days of discovering that a high-risk AI System has caused algorithmic discrimination.
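To make the deployer duties concrete, the sketch below models them as a simple compliance checklist. This is a minimal illustration only: the class, field, and function names are our own shorthand and do not appear in the Act, and the 90-day AG notice window is the only figure taken from the statute.

```python
from dataclasses import dataclass
from datetime import date, timedelta

AG_NOTICE_WINDOW_DAYS = 90  # the Act's notice-to-AG window after discovery


@dataclass
class HighRiskDeployment:
    """Illustrative record of one deployed high-risk AI System (hypothetical model)."""
    system_name: str
    risk_management_program: bool = False  # risk management policy and program in place
    impact_assessment_done: bool = False   # impact assessment completed
    consumer_notice_sent: bool = False     # consumers notified of consequential-decision use
    discrimination_discovered: date | None = None  # date algorithmic discrimination was found

    def outstanding_obligations(self, today: date) -> list[str]:
        """Return a checklist of deployer duties not yet satisfied."""
        gaps: list[str] = []
        if not self.risk_management_program:
            gaps.append("adopt a risk management policy and program")
        if not self.impact_assessment_done:
            gaps.append("complete an impact assessment")
        if not self.consumer_notice_sent:
            gaps.append("notify consumers of the consequential-decision use")
        if self.discrimination_discovered is not None:
            deadline = self.discrimination_discovered + timedelta(days=AG_NOTICE_WINDOW_DAYS)
            if today <= deadline:
                gaps.append(f"notify the Colorado AG by {deadline.isoformat()}")
            else:
                gaps.append("AG notice window has passed; escalate to counsel")
        return gaps
```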

Health Care Services Scope and Exceptions

The Act defines “health care services” by reference to the Public Health Service Act definition.[2] Though this broad definition could encompass a wide range of services, the drafters also accounted for systems that are not high-risk, as well as work already completed or underway at the federal level, by including exceptions applicable to certain health care entities.

HIPAA Covered Entities

The Act will not apply to deployers, developers, or others that are Covered Entities under HIPAA and are providing health care recommendations that: (i) are generated by an AI System; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered high-risk (as defined by the Act). This exception appears to be geared toward health care providers, since it requires a health care provider to implement the recommendations made by the AI System rather than having the system implement them automatically. The scope is not limited to providers, however, as Covered Entities can be health care providers, health plans, or health care clearinghouses. HIPAA Covered Entities may use AI Systems in a range of ways, including but not limited to disease diagnosis, treatment planning, clinical outcome prediction, coverage determinations, diagnostics and imaging, clinical research, and population health management. Depending on the circumstances, many of these uses could be considered “high-risk.” Examples of uses of AI Systems that are not “high-risk” in relation to health care services, and could thus potentially meet this exception, include administrative tasks such as clinical documentation and note-taking, billing, or appointment scheduling.
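Because every prong (plus Covered Entity status) must be satisfied together, the exception can be read as a simple conjunctive test. The sketch below is our own illustration of that reading; the predicate names are shorthand, not statutory terms.

```python
def hipaa_recommendation_exception(is_covered_entity: bool,
                                   recommendation_ai_generated: bool,
                                   provider_action_required: bool,
                                   is_high_risk: bool) -> bool:
    """Conjunctive reading of the HIPAA Covered Entity exception:
    all prongs must hold, and the system must NOT be high-risk."""
    return (is_covered_entity
            and recommendation_ai_generated
            and provider_action_required
            and not is_high_risk)


# Example: an AI scheduling assistant at a hospital (a Covered Entity)
# whose suggestions staff must act on, and which is not high-risk.
assert hipaa_recommendation_exception(True, True, True, False)
```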

FDA-Approved Systems

Deployers, developers, and others that deploy, develop, put into service, or substantially modify high-risk AI Systems that have been approved, authorized, certified, cleared, developed, or granted by a federal agency such as the Food & Drug Administration (FDA) are not required to comply with the Act. Since the FDA has deep experience with AI and machine learning (ML) and, as of May 13, 2024, has authorized 882 AI/ML-enabled medical devices, this is an expected and welcome clarification for entities that have already developed or are working with FDA-authorized AI/ML-enabled medical devices. Additionally, deployers, developers, or others conducting research to support an application for approval or certification from a federal agency such as the FDA, or research to support an application otherwise subject to review by the agency, are not required to comply with the Act. AI Systems are widely used in drug development, and to the extent those activities are approved by the FDA, development and deployment of AI Systems under those approvals are not subject to the Act.

Compliance with ONC Standards

Also exempt from the Act’s requirements are deployers, developers, or others that deploy, develop, put into service, or intentionally and substantially modify a high-risk AI System that complies with standards established by federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC). This exemption helps avoid regulatory uncertainty for certified health IT developers, and for health care providers using certified health IT, in compliance with ONC’s HTI-1 Final Rule, which imposes certain information disclosure and risk management obligations on developers of certified health IT. Not all developers of high-risk AI Systems in health care are developers of certified health IT, but the vast majority are, and this is an important carve-out for developers already in compliance with, or working to comply with, the HTI-1 Final Rule.

Key Takeaways

Using a risk-based approach to review AI System usage may be a new practice for developers and deployers directly or indirectly involved in the provision of health care services. Deployers in particular will want to have processes in place to determine whether they must comply with the Act and to document the results of any applicable analyses. These analyses will involve determining whether an AI System serves as a substantial factor in making consequential decisions (and is thus “high-risk”) in relation to the provision of health care services. If an organization determines that it is using high-risk AI Systems and none of the exceptions above applies, it will need to begin activities such as developing the required risk management policies and procedures, conducting impact assessments for those systems, and setting up consumer and AG notification mechanisms. It will likely take time for organizations to integrate these new obligations into their policies, procedures, and risk management systems, and they will want to make sure the right individuals are included in those conversations and decisions.
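As a rough illustration of that triage, the applicability analysis can be sketched as a short decision function. The inputs are hypothetical yes/no answers an organization would document for each system; this is a sketch of the analysis described above, not statutory text or legal advice.

```python
def act_duties_likely_attach(substantial_factor_in_consequential_decision: bool,
                             fits_hipaa_recommendation_exception: bool,
                             fda_authorized_or_fda_research: bool,
                             meets_onc_standards: bool) -> bool:
    """Return True if the Act's high-risk duties likely apply to a system."""
    # Step 1: only high-risk AI Systems trigger the core duties.
    if not substantial_factor_in_consequential_decision:
        return False
    # Step 2: each of the health care exceptions discussed above removes a
    # system from the Act's scope.
    if (fits_hipaa_recommendation_exception
            or fda_authorized_or_fda_research
            or meets_onc_standards):
        return False
    # Step 3: otherwise, begin risk management, impact assessments, and notices.
    return True
```

If the function returns True, the deployer checklist sketched earlier becomes the work plan; either way, the analysis and its result should be documented.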

[1] Algorithmic discrimination is defined by the Act as “any condition in which the use of an AI System results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status” or other classification protected under Colorado law or federal law.

[2] The PHSA defines health care services as “any services provided by a health care professional, or by any individual working under the supervision of a health care professional, that relate to—(A) the diagnosis, prevention, or treatment of any human disease or impairment; or (B) the assessment or care of the health of human beings.”
