California Governor Newsom has signed Senate Bill 1120, known as the Physicians Make Decisions Act, into law. At a high level, the Act aims to safeguard patient access to treatments by mandating a certain level of health care provider oversight when payors use AI to assess the medical necessity of requested medical services and, by extension, coverage for those services.
Generally, health plans use a process known as utilization management, under which plans review requests for services (also known as prior authorization requests) in an effort to limit utilization of insurance benefits to services that are medically necessary and to avoid costs for unnecessary treatments. Increasingly, health plans are relying on AI to streamline internal operations, including to automate review of prior authorization requests. In particular, AI has shown some promise in reducing costs and in shortening lag times in responding to prior authorization requests. Despite that promise, the use of AI has also raised challenges, including concerns about AI producing results that are inaccurate or biased, or that ultimately lead to wrongful denials of claims. Many of these concerns come down to questions of oversight, and that is precisely what the Act aims to address.
As a starting point, the Act applies to health care service plans and entities with which plans contract for services that include utilization review or utilization management functions (“Regulated Parties”). For purposes of the Act, a “health care service plan” includes health plans licensed by the California Department of Managed Health Care (“DMHC”). Significantly, the Act imposes a number of specific requirements on Regulated Parties’ use of AI tools that perform utilization review or utilization management functions, including most notably:
- The AI tool must base decisions as to medical necessity on:
- The enrollee’s medical or other clinical history;
- The enrollee’s clinical circumstances, as presented by the requesting provider; and
- Other relevant clinical information contained in the enrollee’s medical or other clinical record.
- The AI tool cannot make a decision based solely on a group dataset.
- The AI tool cannot “supplant health care provider decision making.”
- The AI tool may not discriminate, directly or indirectly, against enrollees in a manner that violates federal or state law.
- The AI tool must be fairly and equitably applied.
- The AI tool, including specifically its algorithm, must be open to inspection for audit or compliance review by the DMHC.
- Outcomes derived from use of an AI tool must be periodically reviewed and assessed to ensure compliance with the Act as well as to ensure accuracy and reliability.
- The AI tool must limit its use of patient data in a manner consistent with California’s Confidentiality of Medical Information Act and HIPAA.
- The AI tool cannot directly or indirectly cause harm to enrollees.
Further, a Regulated Party must include disclosures pertaining to its use and oversight of AI in the written policies and procedures that establish the process by which it reviews and approves, modifies, delays, or denies, based in whole or in part on medical necessity, providers’ requests for health care services for plan enrollees.
Most significantly, the Act provides that a determination of medical necessity may be made only by a licensed physician or a licensed health care professional who is competent to evaluate the specific clinical issues involved in the health care services requested by the provider. In other words, the buck stops with a licensed human reviewer, and AI cannot replace that role.
The Act is likely only the first of many AI-related regulations to come in the healthcare space. This is particularly true because the use of AI can have significant real-life consequences. For example, if an AI tool produces incorrect results in utilization management that lead to inappropriate denials of benefits, patients may lose coverage for medically necessary services and may suffer adverse health consequences. Similarly, disputes can arise where providers believe that health plans have inappropriately denied coverage for claims, which can be particularly problematic where an AI tool has adopted a pattern of decision-making that affects a large number of claims. All of the foregoing could have tremendous impacts on patients, providers, and health plans.
We encourage Regulated Parties to take steps to ensure compliance with the Act.