In December 2023, the National Association of Insurance Commissioners (NAIC) adopted a Model Bulletin on the Use of Artificial Intelligence (AI) Systems by Insurers. The model bulletin reminds insurance carriers that they must comply with all applicable insurance laws and regulations (e.g., prohibitions against unfair trade practices) when making decisions that impact consumers, including when those decisions are made or supported by advanced technologies, such as AI systems. The model bulletin was issued by the NAIC’s Innovation, Cybersecurity, and Technology Committee, which includes insurance regulator representatives from 15 states and is tasked with discussing how technological developments may impact consumer protections and insurance oversight. While not a model law or regulation, the model bulletin is intended to serve as a guiding document for state insurance regulators to promote uniformity in state regulatory frameworks. The NAIC’s model applies only to the extent that a state adopts the bulletin.
IN DEPTH
MODEL BULLETIN PRINCIPLES AND STATE ADOPTION
To date, almost a dozen states have adopted the model bulletin with minor customization, and many other states likely will adopt similar standards regarding the use of AI in health insurance.
| State | Date Adopted |
| --- | --- |
| Alaska | February 1, 2024 |
| Connecticut | February 26, 2024 |
| Illinois | March 13, 2024 |
| Kentucky | April 16, 2024 |
| Maryland | April 22, 2024 |
| Nevada | February 23, 2024 |
| New Hampshire | February 20, 2024 |
| Pennsylvania | April 6, 2024 |
| Rhode Island | March 15, 2024 |
| Vermont | March 12, 2024 |
| Washington | April 22, 2024 |
The state bulletins are based on the key principles outlined in NAIC’s model, including:
Insurers should maintain a written program for the use of AI systems.
The model bulletin establishes that insurers should implement and maintain a written program to ensure that AI systems are used responsibly. This includes mitigating adverse consumer outcomes and using verification and testing methods to identify potential biases in the use of AI systems and other predictive models. The responsibility for oversight of the program should rest with senior management or a committee that is accountable to the insurer’s board.
The governance framework should be driven by transparency, fairness and accountability.
Insurers are expected to implement a governance framework overseeing the AI system, including policies and procedures, risk management and internal controls, documentation requirements and similar methods of oversight to be used at each stage of the AI system's life cycle. The governance structure should include committees of appropriate disciplines (e.g., actuarial, data science, underwriting, legal); clearly defined chains of command; monitoring, auditing and reporting protocols; and ongoing training and supervision requirements for personnel.
Risk management and internal controls should be documented.
An insurer's written AI program should document the insurer's risk management and internal control framework, including how AI systems are approved and adopted. This extends to management and oversight of predictive models, including the expectation that insurers maintain descriptions of such models, document their use, and conduct regular audits and assessments of the tools. Insurers should validate, test and retest AI system outputs, as necessary. Insurers are also expected to protect nonpublic information (i.e., consumer information) from unauthorized access.
Insurers are accountable for third-party vendor management.
The model bulletin calls on insurers to develop clear processes for using or acquiring AI-related systems developed by third parties. This includes protocols for assessing the third party's system to ensure that decisions made or supported by such systems meet the legal standards imposed on the insurer. The model bulletin also encourages insurers to consider including terms in their contracts with third parties that provide audit rights and require third parties to cooperate with the insurer on state regulatory inquiries.
Regulators may ask about an insurer’s use and development of AI.
The model bulletin indicates that in the event of a state investigation or market conduct action, an insurer may be asked about its development and use of AI, including the insurer’s governance framework, risk management and internal controls. This could include a request for policies, procedures, training materials and other information relating to the insurer’s implementation, monitoring and oversight of AI systems.
OTHER STATE ACTIVITY
While almost a dozen states have adopted the NAIC’s model bulletin, other states are taking different approaches to regulate the use of AI in insurance. For example:
- In 2022, California's insurance commissioner issued a bulletin stating that the California Department of Insurance is aware of, and is investigating, instances of potential bias and alleged unfair discrimination across lines of insurance resulting from the use of technology and data. The bulletin expressed concern about the irresponsible use of "Big Data" and encouraged insurers to review their practices.
- Colorado established governance and risk management framework requirements for life insurers regarding the use of external consumer data, algorithms, predictive models and similar systems. The regulation aims to prevent unfair discrimination and includes documentation, management and reporting requirements. Colorado also recently enacted the nation’s first comprehensive framework to govern the development and use of AI, which goes into effect on February 1, 2026. Importantly, the law includes an exemption for insurers, fraternal benefit societies and developers of AI systems used by insurers that are subject to C.R.S. § 10-3-1104.9 and the rules adopted by Colorado Insurance Commissioner Michael Conway.
- In 2019, New York issued a circular letter focused on the use of external consumer data in underwriting for life insurance. More recently, in 2024, New York proposed a circular letter focused on underwriting and pricing, and the risk of adverse effects from the use of AI. The proposed 2024 circular letter states that while AI systems can benefit both insurers and consumers by simplifying pricing and underwriting processes, the systems can also exacerbate inequalities. The letter outlines the state’s expectation that insurers establish proper governance and risk management frameworks to mitigate potential harms.
- In 2020, Texas issued a bulletin reminding regulated entities – including their agents and representatives – that they are responsible for the accuracy of data used in claims handling, underwriting and rating practices.
KEY TAKEAWAY
As AI systems are increasingly used in all facets of the insurance industry, from product development to sales, pricing, claims management and more, AI regulation in the health insurance sector is likely to grow. Health insurers should be mindful of these developments and the different approaches states may take to overseeing and investigating insurers' use of AI. While many states have adopted the NAIC's model, others are implementing state-specific requirements, which could result in a patchwork of AI standards. Health insurers operating across multiple states should track the differing standards and maintain compliance as these expectations evolve.