Two New AI Laws, Two Different Directions (For Now)
Friday, May 9, 2025

Key Takeaways

  • Colorado legislature rejects amendments to the Colorado AI Act (CAIA).
  • Proposed amendments sought multiple exemptions, exceptions and clarifications.
  • Utah legislature enacts amendments that include a safe harbor for mental health chatbots.
  • Utah’s safe harbor provision includes a written policy and procedures framework.

Last week in Colorado, highly anticipated amendments to the state’s AI Act were submitted to the legislature. But in a surprising turn of events this week, every one of the proposed amendments was rejected, setting the stage for a sprint to February 1, 2026, the effective date of Colorado’s first-of-its-kind law governing how AI may be used with consumers.

Meanwhile, in Utah, which enacted an AI law last year that increases consumer protection while encouraging responsible innovation, amendments to its AI Policy Act (UAIP) took effect this week. The amendments stem in part from guidance found in the Best Practices for the Use of AI by Mental Health Therapists, published in April by Utah’s Office of Artificial Intelligence Policy (OAIP).

We recently highlighted how a global arms race may mean U.S. states are best positioned to regulate AI risks, as evidenced by Colorado and Utah’s approaches, and how other states are emphasizing existing laws they say “have roles to play.” While there is still a lot of uncertainty, the outcome of the amendments in Colorado and Utah is instructive.

Colorado’s Rejected Amendments

A lot can be learned from what was rejected in Colorado this week, especially as other states, such as Connecticut, consider adopting their own versions of an AI law for consumer protection, and as states that have already rejected such laws, such as Virginia, prepare to reconsider newer versions with wider input.

In some ways, it is not surprising that the amendments were rejected: they reflected opposing input from the technology sector and consumer advocates.[1] They included technical changes, such as exempting specified technologies from the definition of “high risk” and creating an exception for developers that disclose system model weights (e.g., parameters, biases).

The amendments also included non-technical changes, such as eliminating the duty of a developer or deployer of a high-risk AI system to use reasonable care to protect consumers. That change was always going to be untenable. But others made sense, such as exemptions for systems below certain investment or revenue thresholds ($10 million and $5 million), which is why it is surprising that all of the amendments were rejected, including one that would have delayed the attorney general’s authority to enforce CAIA violations until 2027. Given the scope of the proposed amendments that have now been considered and rejected, it appears extraordinary circumstances would be needed for CAIA’s effective date to be delayed.

Utah’s AI Amendments

As previously noted, the UAIP endeavors to enable innovation through a regulatory sandbox for responsible AI development, regulatory mitigation agreements, and policy and rulemaking by the OAIP. Recently, the OAIP released findings and guidance for the mental health industry that were adopted by the legislature as amendments to the Act.

The guidance comprises 54 pages, the first 40 of which describe potential benefits and important risks associated with AI. It then examines use cases of AI in mental health therapy, especially in relation to inaccurate AI outputs, and sets forth best practices across these categories:

  1. Informed consent;
  2. Disclosure;
  3. Data privacy and safety;
  4. Competence;
  5. Patient needs; and
  6. Continuous monitoring and reassessment.

Emphasis is placed on competence. For example, the guidance states that therapists must maintain a high level of competence, which “involves continuous education and training to understand these AI technologies’ capabilities, limitations, and proper use.” This is consistent with how the UAIP specifies that businesses cannot blame AI for errors and violations.[2]

The guidance further states that mental health therapists should know “how frequently and under what circumstances one should expect the AI tool to produce inaccurate or undesirable outputs,” thus appearing to create a duty of care not only for AI system developers and deployers but also for users. The guidance refers to these as “digital literacy” requirements.

Also, through its emphasis on continuous monitoring and reassessment, the guidance states that therapists, “to the best of their abilities,” should regularly and critically challenge AI outputs for inaccuracies and biases and intervene promptly if the AI produces incorrect, incomplete or inappropriate content or recommendations.

Based on the guidance, House Bill 452 was enacted. It includes provisions regulating mental health chatbots that use AI technologies, covering the protection of personal information, restrictions on advertising, disclosure requirements and the remedies available for enforcement by the attorney general.

House Bill 452 also includes an affirmative defense provision for mental health chatbots, in effect a safe harbor from litigation arising from alleged harm caused by a mental health chatbot. To qualify for safe harbor protection, the supplier must develop a written policy that states the purpose of the chatbot and its abilities and limits.

In addition, the supplier must implement procedures that ensure mental health therapists are involved in developing a review process, that the chatbot is developed and monitored consistent with clinical best practices, and that it is tested to confirm it poses no greater risk to a user than a therapist would, along with roughly ten other requirements.

As early best practices, the guidance may evolve into industry standards that establish legal duties and inform the risk management policies and programs contemplated by new laws and regulations such as the CAIA and the UAIP. If so, those standards could form the basis for enforceable contract provisions.

Final Thoughts

We have previously provided recommendations, both holistic and specific, that individuals and organizations should consider to mitigate risks associated with AI, with an emphasis on data collection practices. But, as shown through the rejected amendments in Colorado and the enacted AI amendments in Utah, AI literacy might be the most essential requirement.


[1] For an insightful description of how the amendments died, see journalist Tamara Chuang’s excellent reporting here: https://coloradosun.com/2025/05/05/colorado-artificial-intelligence-law-killed/

[2] Utah Code Ann. section 13-2-12(2).
