Utah is one of a handful of states that have led in regulating AI. Utah's Artificial Intelligence Policy Act[i] ("UAIPA"), enacted in 2024, requires disclosures relating to consumer interactions with generative AI, with heightened requirements for regulated professions, including licensed healthcare professionals.
Utah recently passed three AI laws (HB 452, SB 226 and SB 332), all of which became effective on May 7, 2025, and either amend or expand the scope of the UAIPA. The laws govern the use of mental health chatbots, revise disclosure requirements for the deployment of generative AI in connection with a consumer transaction or provision of regulated services, and extend the repeal date of the UAIPA.
HB 452
HB 452 creates disclosure requirements, advertising restrictions, and privacy protections for the use of mental health chatbots.[ii] "Mental health chatbots" refer to AI technology that (1) uses generative AI to engage in conversations with a user of the mental health chatbot, similar to communications one would have with a licensed therapist, and (2) a supplier represents, or a reasonable person would believe, can provide mental health therapy or help manage or treat mental health conditions. "Mental health chatbots" do not include AI technology that only provides scripted output (such as guided meditations or mindfulness exercises).
Disclosure Requirements
A mental health chatbot must clearly and conspicuously disclose that it is an AI technology and not human. The disclosure must be made (1) before the user accesses features of the mental health chatbot, (2) at the beginning of any interaction with the user, if the user has not accessed the mental health chatbot within the previous seven days, and (3) if the user asks or prompts the chatbot about whether AI is being used.
Personal Information Protections
Mental health chatbot suppliers may not sell or share with any third party the individually identifiable health information (“IIHI”) or user input of a user. The prohibition does not apply to IIHI that (1) a health care provider requests with the user’s consent, (2) is provided to a health plan upon the request of the user, or (3) is shared by the supplier as a covered entity to a business associate to ensure effective functionality of the mental health chatbot and in compliance with the HIPAA Privacy and Security Rules.
Advertising Restrictions
A mental health chatbot cannot be used to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot, unless the mental health chatbot clearly and conspicuously (1) identifies the advertisement as an advertisement and (2) discloses any sponsorship, business affiliation or agreement with a third party to promote or advertise the product or service. Suppliers of mental health chatbots may not use a user’s input to (1) determine whether to display advertisements to the user unless the advertisement is for the mental health chatbot itself, (2) customize how advertisements are presented, or (3) determine a product, service or category to advertise to the user.
Affirmative Defense
HB 452 establishes an affirmative defense to violations of the law which requires, among other items, creating, maintaining and implementing a policy for the mental health chatbot that meets specific requirements outlined in the law and filing such policy with the Utah Division of Consumer Protection.
Penalties
Violation of the law may result in administrative fines up to $2,500 per violation and court action by the Utah Division of Consumer Protection.
SB 226
SB 226 pares back UAIPA’s disclosure requirements applicable to a supplier that uses generative AI in a consumer transaction to when (1) there is a “clear and unambiguous” request from an individual to determine whether an interaction is with AI, rather than any request, and (2) an individual interacts with generative AI in the course of receiving regulated services that constitute a “high-risk” AI interaction, instead of any generative AI interaction in the provision of regulated services.[iii]
Disclosure Requirements
If an individual asks or prompts a supplier about whether AI is being used, a supplier that uses generative AI to interact with an individual in connection with a consumer transaction must disclose that the individual is interacting with generative AI and not a human. While this requirement also existed under the UAIPA, SB 226 clarifies that disclosure is only required when the individual’s prompt or question is a “clear and unambiguous request” to determine whether an interaction is with a human or AI.
The UAIPA also requires persons who provide services of a regulated occupation to prominently disclose when a person is interacting with generative AI in the provision of regulated services, regardless of whether the person inquires if they are interacting with generative AI. Under SB 226, such disclosure is only required if the use of generative AI constitutes a “high-risk artificial intelligence interaction.” The disclosure must be provided verbally at the start of a verbal conversation and in writing before the start of a written interaction. “Regulated occupation” means an occupation that is regulated by the Utah Department of Commerce and requires a license or state certification to practice the occupation, such as nursing, medicine, and pharmacy. “High-risk AI interaction” includes an interaction with generative AI that involves (1) the collection of sensitive personal information, such as health or biometric data and (2) the provision of personalized recommendations, advice, or information that could reasonably be relied upon to make significant personal decisions, including the provision of medical or mental health advice or services.
Safe Harbor
A person is not subject to an enforcement action for violation of the disclosure requirements if the person's generative AI clearly and conspicuously discloses, at the outset of and throughout an interaction in connection with a consumer transaction or the provision of regulated services, that it is (1) generative AI, (2) not human, or (3) an AI assistant.
Penalties
Violation of the law may result in administrative fines up to $2,500 per violation and a court action by the Utah Division of Consumer Protection.
SB 332
SB 332 extended the repeal date of the UAIPA from May 1, 2025, to July 1, 2027.[iv]
Looking Forward
Companies that offer mental health chatbots or generative AI in interactions with individuals in Utah should evaluate their products and processes to ensure compliance with the law. Furthermore, the AI regulatory landscape at the state level is rapidly changing as states attempt to govern the use of AI in an increasingly deregulatory federal environment. Healthcare companies developing and deploying AI should monitor state developments.
FOOTNOTES
[i] S.B. 149 ("Utah Artificial Intelligence Policy Act"), 65th Leg., 2024 Gen. Session (Utah 2024).
[ii] H.B. 452, 66th Leg., 2025 Gen. Session (Utah 2025).
[iii] S.B. 226, 66th Leg., 2025 Gen. Session (Utah 2025).
[iv] S.B. 332, 66th Leg., 2025 Gen. Session (Utah 2025).