Bias in Healthcare Algorithms
Friday, February 12, 2021

The application of artificial intelligence technologies to health care delivery, coding and population management may profoundly alter the way clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline onerous coding and claims submission, it also carries risks from unintended bias lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, the bias may cause unintended errors when the model is applied to new patients, resulting in errors in utilization management, coding, billing and healthcare delivery.
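As a minimal illustration of how encoded bias propagates, consider the sketch below. The data, group labels and approval rule are entirely hypothetical; the point is only that a model fit to historically skewed approval decisions will reproduce that skew when scoring new patients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: historical approval decisions that were
# skewed by group membership rather than by clinical need alone.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0 = cohort A, 1 = cohort B (hypothetical)
severity = rng.normal(0, 1, n)      # clinical severity score (hypothetical)

# Historical reviewers approved cohort B more readily at the same severity.
p_approve = 1 / (1 + np.exp(-(severity + 1.5 * group)))
approved = rng.random(n) < p_approve

# A model trained on these labels learns the skew, not just the medicine.
model = LogisticRegression().fit(np.column_stack([severity, group]), approved)

# At identical severity, predicted approval differs by group alone.
same_patient = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_patient)[:, 1])  # e.g. ~0.50 vs ~0.82
```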

The following hypothetical illustrates the problem.

A physician practice management service organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both its fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients, without the need for human coders.

 The MSO’s compliance officer observes two issues:

  1. It appears that Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than patients in other cohorts seeking the same care, and

  2. Since the deployment of the software, the MSO is realizing increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes being identified by the AI tool.

Though the compliance officer has no independent studies to support it, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

  1. is also comfortable with the vendor’s assurances that its software is more accurate than eyes-on human coding;

  2. understands that prevalence data suggests that the elderly population in the United States likely has undiagnosed rheumatic conditions; and

  3. finds, through her own investigation, anecdotal evidence that the software, while perhaps over-inclusive, is catching some diagnoses that a clinician alone might have missed.

 Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate, both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native Americans (overutilization) while failing to approve medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health care spending and can result in payment errors, while underutilization can result in improper denials, patient harm and legal exposure. The second issue is that the AI tool may be “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in its training set, are not supported in the MSO’s own population. This error can result in the submission of unsupported codes that drive risk adjustment payments, carrying significant legal and financial exposure.
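A compliance team could begin to surface both issues with simple descriptive checks along the lines of the hedged sketch below. The data, column names, thresholds and benchmark prevalence are hypothetical placeholders; real monitoring would require clinically validated benchmarks and proper statistical review.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# One row per prior-authorization decision (hypothetical toy data).
auth = pd.DataFrame({
    "cohort":   ["Native American"] * 200 + ["Other"] * 800,
    "approved": [1] * 180 + [0] * 20 + [1] * 520 + [0] * 280,
})

# Check 1: do approval rates differ materially by cohort?
rates = auth.groupby("cohort")["approved"].mean()
print(rates)  # 0.90 vs 0.65 in this toy data

table = pd.crosstab(auth["cohort"], auth["approved"])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square p-value: {p:.2g}")  # a small p flags a disparity to investigate

# Check 2: is the AI-coded prevalence of rheumatic conditions plausible?
observed_prevalence = 0.31    # share of MA panel coded post-deployment (hypothetical)
benchmark_prevalence = 0.18   # expected rate from population data (hypothetical)
if observed_prevalence > 1.25 * benchmark_prevalence:
    print("Coding rate exceeds benchmark by >25% - audit a chart sample.")
```

Neither check proves bias or fraud on its own; each simply tells the compliance officer where chart review and validation of the vendor’s training data should begin.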
