The question is whether health insurers are using their AI predictive tools properly, in accordance with state and federal law, or deploying them solely as cost-saving measures to improperly deny patients and health care providers coverage and payment for medically necessary services.
Two recent class actions allege that two of the nation’s largest health insurers, Cigna and UnitedHealthcare (UHC), have crossed the line by integrating AI predictive tools into their systems to automate denials of claims on medical necessity grounds, improperly denying patients coverage for medical services and overriding the medical determinations of their doctors.
On July 24, a group of patients sued Cigna in the United States District Court for the Eastern District of California, alleging that the insurance company improperly used an AI algorithm known as PXDX to automatically deny insured beneficiaries’ claims and short-circuit the physician review of those claims mandated by California state law. The plaintiffs allege that Cigna relied on the algorithm to enable its own doctors to deny thousands of claims at a time for treatments that did not match certain preset criteria, without any actual physician review of the medical records. The plaintiffs further allege that, in doing so, Cigna eliminated the legally required individual physician review of medical necessity and breached its duties to its covered beneficiaries by failing to ensure that they received the benefits required under their Cigna policies. The lawsuit is based, in part, on the March 25 ProPublica article titled “How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them.”
On November 14, family members of two deceased UHC beneficiaries filed a class action lawsuit in the United States District Court for the District of Minnesota, challenging the insurer’s use of an AI algorithm to make claim determinations. The plaintiffs allege that UHC improperly used the nH Predict AI Model to deny extended care claims for elderly patients based on erroneous health care determinations generated by the algorithm. The plaintiffs also allege that UHC used the AI tool to override the determinations of medical professionals, including those employed by the insurer. Some of the allegations are based on the March 13 STAT article titled “Denied by AI: How Medicare Advantage Plans Use Algorithms to Cut Off Care for Seniors in Need.”
Both cases are still at an early procedural stage, and we expect both health insurers to defend their conduct vigorously. Regardless of the ultimate outcome in either case, one thing is clear: given recent technological advancements, AI predictive tools and algorithms are here to stay, and we should expect health insurers to continue deploying them in the claims adjudication, payment, and appeal process. The question remains whether the technology will be used appropriately or as a means to withhold medical benefits due to patients and payments owed to health care providers.