AI in Employer-Sponsored Group Health Plans: Legal, Ethical, and Fiduciary Considerations
Tuesday, September 16, 2025

The ubiquity of artificial intelligence (AI) extends to tools used by, and on behalf of, employer-sponsored group health plans. These AI tools raise no shortage of concerns. In this article, we analyze key issues that stand out as requiring immediate attention by plan sponsors and plan administrators alike.

In Depth

AI background

On November 30, 2022, OpenAI released ChatGPT for public use. A consensus immediately emerged that something fundamental had changed: For the first time, an AI with apparent human – or at least near-human – intelligence became widely available. The release of an AI into the proverbial wild was not welcomed in all quarters. An open letter published in March 2023 by the Future of Life Institute worried that “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Three years later, that concern may seem overstated. Yet AI systems are now ubiquitous; the world has changed in ways so profound they are difficult to fathom; and the true impact of AI may take decades to fully grasp.

One of the many domains that AI will revolutionize is the workplace, which our colleagues, Marjorie Soto Garcia, Brian Casillas, and David P. Saunders, have previously addressed (see their presentation, “AI in the Workplace: How State Laws Impact Employers,” which considers the use of AI in human resources and workplace management processes). Marjorie and Brian had previously defined the various types of AI and their deployment in their article “Risk Management in the Modern Era of Workplace Generative AI,” which we recommend reading for helpful context. In this client alert, we examine an even narrower, though nevertheless critically important, subset of AI in the workplace: how AI affects employer-sponsored group health plans.

Employer-sponsored group health plans are a central feature of the US healthcare landscape, covering more than 150 million Americans. These plans exist at the intersection of healthcare delivery, insurance risk pooling, and employment law. AI has emerged as a transformative technology in health administration. AI-enabled tools promise improved efficiency, cost savings, better clinical outcomes, and streamlined administrative processes. Yet, their adoption in group health plans raises complex regulatory, fiduciary, and ethical questions, particularly under the Employee Retirement Income Security Act of 1974 (ERISA), which governs virtually all workplace employee benefit plans.

As noted above, the use of AI tools by, and on behalf of, employer-sponsored group health plans raises many concerns. Three issues, however, require immediate attention by plan sponsors and plan administrators alike: claims adjudication, fiduciary oversight, and vendor contracts.

Claims adjudication: Autonomous decision-making

AI technology – defined as machine-based systems such as algorithms, machine learning models, and large language models – is increasingly used by insurers and third-party administrators (TPAs) to assess clinical claims on behalf of health plan participants.

AI systems are likely to be used to make basic eligibility determinations, which are based on plan terms and relevant information about the employee and their beneficiaries. More substantively, AI systems may also be used to make clinical determinations. These systems could, for example, evaluate whether a particular treatment or service is deemed medically necessary based on training data and preprogrammed rules. For this purpose, an AI system might scan diagnostic codes, patient histories, and treatment guidelines to determine whether a claim aligns with standard clinical practice. AI is already being used to ease internal administrative functions for carriers and claims administrators, and its use is expected to expand significantly.
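To make this concrete, the following is a minimal Python sketch of how a decision-support pipeline of this kind might be structured. Every name, field, and threshold here is a hypothetical illustration rather than any carrier’s actual system, and the “model” is a stand-in rule; the point is the shape of the flow, with AI recommending and a human clinician retaining final authority.

    from dataclasses import dataclass

    # Hypothetical illustration only: names, fields, and thresholds are
    # invented for this sketch and do not reflect any vendor's system.

    @dataclass
    class Claim:
        diagnosis_code: str    # e.g., an ICD-10 code
        procedure_code: str    # e.g., a CPT code
        covered_codes: set     # procedures covered under plan terms

    def eligibility_check(claim: Claim) -> bool:
        """Basic eligibility: is the billed procedure covered by plan terms?"""
        return claim.procedure_code in claim.covered_codes

    def necessity_score(claim: Claim) -> float:
        """Stand-in for a trained model scoring medical necessity (0.0-1.0).
        A real system would weigh diagnostic codes, patient history, and
        treatment guidelines."""
        return 0.9 if claim.diagnosis_code.startswith("M") else 0.4  # toy rule

    def route_claim(claim: Claim) -> str:
        """AI recommends; a human decides anything short of approval."""
        if not eligibility_check(claim):
            return "human review: eligibility question"
        if necessity_score(claim) >= 0.8:
            return "recommend approval (human sign-off)"
        return "escalate to clinician; never auto-deny"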

There is a separate though equally important question of AI use in preauthorization. AI tools have the potential to improve this process by automating prior authorization requests, predicting approval likelihood, and flagging cases that require expedited human review. For example, AI could match a request for MRI imaging against clinical guidelines, patient history, and plan terms to recommend approval within seconds.
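A triage step of that kind might look like the following sketch, again with invented thresholds and field names. The pattern is the one described above: a model predicts approval likelihood, urgent cases are flagged for expedited human review, and nothing is denied automatically.

    # Hypothetical preauthorization triage; the 0.95 threshold and field
    # names are illustrative assumptions, not any actual plan's logic.
    def triage_prior_auth(predicted_approval: float, is_urgent: bool) -> str:
        """Route a prior-authorization request using a model's predicted
        approval likelihood, flagging cases for expedited human review."""
        if is_urgent:
            return "expedited human review"  # clinical urgency comes first
        if predicted_approval >= 0.95:
            return "recommend approval (human sign-off)"
        return "standard human review"

    # Example: an MRI request scored at 0.97 likelihood, not urgent.
    print(triage_prior_auth(0.97, is_urgent=False))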

If an AI tool is used for both initial claims adjudication and preauthorization, the data on which that tool was trained becomes doubly consequential, and this raises some fundamental concerns. There are currently three federal rules governing machine-readable files, the purpose of which is to make pricing and claims data widely available:

  • The hospital price transparency rule, requiring hospitals to disclose standard charges for their items and services, including the gross charge, the discounted cash price, and the minimum and maximum charges the hospital has negotiated with third-party payers.
  • The Transparency in Coverage final rule, requiring health plans to disclose in-network negotiated rates and historical out-of-network billed charges and allowed amounts.
  • The transparency rules under the Consolidated Appropriations Act, 2021, imposing additional disclosure obligations on plans and issuers.

All three rules disclose data that embeds decades of provider and carrier pricing practices, the very practices transparency was designed to expose. If these datasets are used to train AI tools, those inherent biases and systemic flaws could be baked into the resulting models.
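To give a sense of the data at issue, the Python sketch below parses a toy in-network file loosely modeled on the CMS Transparency in Coverage schema and measures how widely negotiated rates vary for a single service. The structure and dollar figures are simplified assumptions; real machine-readable files are far larger and more complex.

    import json
    from statistics import mean

    # Toy payload loosely modeled on the Transparency in Coverage
    # in-network schema; structure and figures are assumptions.
    sample = """
    {
      "reporting_entity_name": "Example Payer",
      "in_network": [
        {
          "billing_code_type": "CPT",
          "billing_code": "70551",
          "description": "MRI brain without contrast",
          "negotiated_rates": [
            {"negotiated_prices": [{"negotiated_rate": 612.50}]},
            {"negotiated_prices": [{"negotiated_rate": 1480.00}]}
          ]
        }
      ]
    }
    """

    data = json.loads(sample)
    for item in data["in_network"]:
        rates = [p["negotiated_rate"]
                 for nr in item["negotiated_rates"]
                 for p in nr["negotiated_prices"]]
        # Wide dispersion for the same service hints at the embedded
        # pricing practices that could skew a model trained on this data.
        print(item["billing_code"], f"mean={mean(rates):.2f}",
              f"spread={max(rates) - min(rates):.2f}")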

Fiduciary oversight

For fiduciaries of group health plans, any use of AI technology presents two fundamental problems. First, AI models are by their nature black boxes. Second, there are limits to AI competence and accuracy that cannot be eliminated.

The black-box nature of AI technology poses a significant problem for fiduciaries of group health plans. The opacity of the models makes ERISA-required monitoring and oversight extremely challenging. Robust third-party standards are needed to establish measurement science and interoperable metrics for AI systems, and independent certification of vendor AI systems would help fiduciaries meet ERISA standards. Ideally, the US Department of Labor (DOL) would issue guidance, as it has done in related contexts such as its cybersecurity guidance, which initially addressed ERISA-covered pension plans and was later extended to welfare plans.

Questions about AI competence and accuracy are equally daunting for plan fiduciaries. The current crop of AI models is trained through backpropagation, a process that iteratively refines accuracy but never perfects it: even the most advanced models approach full reliability only asymptotically, leaving some residual error rate. This raises a threshold legal question: whether and to what extent fiduciaries can prudently rely on AI technology. The practical answer is that they likely already do, sometimes unknowingly, which means that some validation of AI models is essential if fiduciaries are to meet ERISA’s requirements.
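One concrete form such validation could take is a periodic audit: sample AI-adjudicated claims at random, have independent clinicians re-review them, and statistically bound the observed error rate. The Python sketch below computes a Wilson score confidence interval for that purpose; the audit figures are invented for illustration.

    from math import sqrt

    def wilson_interval(errors: int, n: int, z: float = 1.96):
        """95% Wilson score interval for an observed error proportion,
        bounding a model's true error rate from a finite audit sample."""
        p = errors / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - margin, center + margin

    # Invented example: clinicians disagreed with 14 of 500 audited AI calls.
    lo, hi = wilson_interval(14, 500)
    print(f"observed error rate 2.8%, 95% CI [{lo:.1%}, {hi:.1%}]")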

According to the DOL, ERISA fiduciaries must ensure that plan decisions are made “solely in the interest of participants and beneficiaries” with prudence and diligence. Delegating critical claim denials to opaque AI models risks violating this duty. At a minimum, AI tools should be limited to a decision-support role, with final authority resting in human hands.

Vendor contracts

The generic reference to an ERISA-covered group health plan using AI masks an important reality: AI will typically be used on the plan’s behalf by TPAs and insurers and less often directly by the plan itself. For most large, multistate self-funded plans, this means large, national carriers acting in an administrative services only (ASO) capacity.

ASO providers have significant leverage in negotiating contract terms. A plan that delegates claims adjudication without inquiring whether and how the third party uses AI in that process risks violating fiduciary standards and inviting scrutiny.

Fiduciaries should work to understand and evaluate the risk that delegating plan functions to third parties poses and take appropriate steps in response. At a minimum, this would necessitate a group health plan insisting on AI-related contract provisions. For example, the group health plan should insist that:

  • Any claim denials must be reviewed and ultimately decided by a human clinician.
  • The ASO must disclose how AI is tested and audited.
  • Performance guarantees and indemnification provisions cover AI-related failures.

Given the infancy of AI in group health plan administration, ASOs are expected to resist comprehensive AI provisions, highlighting the need for federal guidance and clearer market standards.

Legal landscape

State law considerations

The most prominent example of a state attempting to establish rules governing the use of AI for group health plans is California’s Physicians Make Decisions Act, enacted as Senate Bill 1120. Effective January 1, 2025, the act prohibits the use of AI by insurance companies in certain instances of claim and utilization review for health plans and prohibits health insurers from relying solely on AI to deny any claim based specifically on medical necessity. At a high level, it requires that medical professionals, rather than AI, make any determination as to whether a treatment is medically necessary.

The act and similar laws being considered in other states, such as Arizona, Maryland, Nebraska, and Texas, highlight the need for group health plans to be aware of and ensure state law compliance, particularly as they review and negotiate ASO contracts.

Federal agency guidance

The DOL and the US Department of the Treasury have issued limited, indirect guidance on the use of AI. This guidance illustrates how the government might handle AI issues in the benefit plan context.

A withdrawn DOL bulletin (Field Assistance Bulletin 2024-1) discussed the risks associated with using AI under the Family and Medical Leave Act. In the bulletin, the DOL generally warned that “without responsible human oversight, the use of such technologies may pose potential compliance challenges.” It noted that although similar violations may occur under human decision-making, violations involving AI carry a higher risk of propagating across an entire task or workforce.

The US Department of the Treasury has studied AI in the financial services sector, noting in a report that while AI offers various benefits, its use can raise concerns over bias, explainability, and data privacy. Those concerns apply equally in health plan administration.

Both agencies emphasized the need for human oversight, echoing ERISA fiduciary obligations.

AI Disclosure Act

Although not law, the proposed AI Disclosure Act of 2023 would require disclaimers on AI-generated content. If a version of this bill is passed, it would further require companies to disclose certain information about the use of AI. Even absent a mandate, plan sponsors may wish to consider voluntary disclosures when AI tools are used in communications or claims processes.

 

Next steps and action items

While AI technology holds immense promise for employer-sponsored group health plans, its deployment carries significant fiduciary, ethical, and regulatory challenges. Plans must still meet their legal obligations under ERISA and other applicable laws, and they should ensure that AI does not autonomously make clinical or claims decisions, that preauthorization systems are transparent and fair, that data biases are identified and corrected, and that fiduciary oversight mechanisms are robust. Fiduciaries must balance innovation with prudence and ensure that AI is a tool that enhances, rather than replaces, human judgment. In any event, AI use likely increases fiduciary oversight obligations and necessitates ongoing monitoring.

In light of the proliferation of AI and its impact on employee benefit plans, practical measures that plan sponsors should consider may include:

  • Inventory AI use: Identify where AI is being used by the plan and by TPAs, insurers, and other vendors in claims adjudication, preauthorization, and participant communications.
  • Update fiduciary processes: Incorporate AI oversight into plan committee charters, meeting agendas, and monitoring practices.
  • Review contracts: In new and existing contracts, negotiate for disclosure, audit rights, performance guarantees, and human review of denials in situations where AI is involved.
  • Validate outputs: Request disclosure of vendor documentation on testing, error rates, and mitigation of bias, and review that documentation with plan consultants during regular fiduciary committee meetings (a sketch of one such check follows this list).
  • Stay informed on state laws: Monitor state legislative activity (e.g., California) and update compliance procedures.
  • Educate plan fiduciaries: Provide education to committee members on AI risks, limitations, and monitoring obligations under ERISA.
  • Document oversight: Keep records of inquiries, discussions, and decisions relating to AI to demonstrate prudence.
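As noted under “Validate outputs” above, one check a committee could ask its consultants to run is a comparison of denial rates across participant subgroups in vendor-reported output. The Python sketch below does this with invented group labels, counts, and a 1.25x flag threshold.

    # Illustrative disparity check; group labels, counts, and the 1.25x
    # flag threshold are invented for this sketch.
    denials = {"group_a": (120, 2000), "group_b": (210, 2100)}  # (denied, total)

    rates = {g: d / n for g, (d, n) in denials.items()}
    baseline = min(rates.values())
    for group, rate in rates.items():
        ratio = rate / baseline
        flag = "  <-- investigate" if ratio > 1.25 else ""
        print(f"{group}: denial rate {rate:.1%} ({ratio:.2f}x baseline){flag}")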

The use of AI in benefits is a complicated and rapidly evolving topic.
