Fiduciaries should be aware of recent developments involving AI, including new and emerging state laws, increased state and federal government interest in regulating AI, and the role of AI in ERISA litigation. While much focus has been on AI’s impact on retirement plans, which we previously discussed here, plan fiduciaries of all types, including those of health and welfare benefit plans, must also stay informed about recent AI developments.
Recent State Law Changes
Numerous states have recently enacted laws focusing on AI, some of which regulate employers’ human resource decision-making processes. Key examples include:
- California – In 2024, California enacted over 10 AI-related laws, addressing topics such as:
- The use of AI with datasets containing names, addresses, or biometric data;
- How health care information is communicated to patients using AI; and
- AI-driven decision-making in medical treatments and prior authorizations.
For additional information on California’s new AI laws, see Foley’s Client Alert, Decoding California’s Recent Flurry of AI Laws.
- Illinois – Illinois passed legislation prohibiting employers from using AI in employment activities in ways that lead to discriminatory effects, regardless of intent. Under the law, employers must notify employees and applicants if they will use AI for any workplace-related purpose.
For additional information on Illinois’ new AI law, see Foley’s Client Alert, Illinois Enacts Legislation to Protect against Discriminatory Implications of AI in Employment Activities.
- Colorado – The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, requires employers to use “reasonable care” to protect against algorithmic discrimination when deploying certain high-risk AI systems.
For additional information on Colorado’s new AI law, see Foley’s Client Alert, Regulating Artificial Intelligence in Employment Decision-Making: What’s on the Horizon for 2025.
While these laws do not specifically target employee benefit plans, they are aimed at human resource decision-making processes and reflect a broader trend of states regulating such practices as part of an evolving regulatory environment. Hundreds of additional state bills were proposed in 2024, along with AI-related executive orders, signaling more regulation to come in 2025. Questions remain about how these laws intersect with employee benefit plans and whether ERISA could preempt such state regulation.
Recent Federal Government Actions
The federal government recently issued guidance aimed at preventing discrimination in the delivery of certain healthcare services and completed a request for information (RFI) process on potential AI regulation of the financial services industry.
- U.S. Department of Health and Human Services (HHS) Civil Rights AI Nondiscrimination Guidance – HHS, through its Office for Civil Rights (OCR), recently issued a “Dear Colleague” letter titled Ensuring Nondiscrimination Through the Use of Artificial Intelligence and Other Emerging Technologies. This guidance emphasizes the importance of ensuring that the use of AI and other decision-support tools in healthcare complies with federal nondiscrimination laws, particularly under Section 1557 of the Affordable Care Act (Section 1557).
Section 1557 prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in health programs and activities receiving federal financial assistance. OCR’s guidance underscores that healthcare providers, health plans, and other covered entities cannot use AI tools in ways that have discriminatory impacts on patients, including in decisions related to diagnosis, treatment, and resource allocation. Employers and plan sponsors should note that this guidance applies only to health plans subject to Section 1557 (generally, those receiving federal financial assistance), not to all employer-sponsored health plans.
- Treasury Issues RFI for AI Regulation – In 2024, the U.S. Department of the Treasury published an RFI on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. The RFI raised several key considerations, including AI bias and discrimination, consumer protection and data privacy, and risks to third-party users of AI. While the RFI has not yet led to concrete regulations, it underscores federal attention to AI’s impact on financial and employee benefit services. The ERISA Industry Committee, a nonprofit association representing large U.S. employers in their capacity as employee benefit plan sponsors, commented that AI is already being used for retirement readiness applications, chatbots, portfolio management, trade executions, and wellness programs. Future regulations may target these and related areas.
AI-Powered ERISA Litigation
Potential ERISA claims against plan sponsors and fiduciaries are being identified using AI. In one example, the AI platform Darrow AI claims to be:
“designed to simplify the analysis of large volumes of data from plan documents, regulatory filings, and court cases. Our technology pinpoints discrepancies, breaches of fiduciary duty, and other ERISA violations with accuracy. Utilizing our advanced analytics allows you to quickly identify potential claims, assess their financial impact, and build robust cases… you can effectively advocate for employees seeking justice regarding their retirement and health benefits.”
Further, this AI platform claims it can find violations affecting many types of employers, from small businesses to large corporations, by analyzing diverse data sources, including news, SEC filings, social networks, academic papers, and other third-party sources.
Notably, health and welfare benefit plans are also emerging as areas of focus for AI-powered ERISA litigation. AI tools are used to analyze claims data, provider networks, and administrative decisions, potentially identifying discriminatory practices or inconsistencies in benefit determinations. For example, AI could highlight patterns of bias in prior authorizations or discrepancies in how mental health parity laws are applied.
The increasing sophistication of these tools raises the stakes for fiduciaries, as they must now consider the possibility that potential claimants will use AI to scrutinize their decisions and plan operations with unprecedented precision.
Next Steps for Fiduciaries
To navigate this evolving landscape, fiduciaries should take proactive steps to manage AI-related risks while leveraging the benefits of these technologies:
- Evaluate AI Tools: Undertake a formal evaluation of the AI tools used for plan administration, participant engagement, and compliance. This assessment should examine the algorithms, data sources, and decision-making processes involved and confirm that the tools comply with nondiscrimination standards and do not inadvertently produce biased outcomes.
- Audit Service Providers: Conduct comprehensive audits of plan service providers to evaluate their use of AI. Request detailed disclosures regarding the AI systems in operation, focusing on how they mitigate bias, ensure data security, and comply with applicable regulations.
- Review and Update Policies: Formulate or revise internal policies and governance frameworks to monitor the use of AI in plan operations and its compliance with nondiscrimination laws. These policies should set guidelines for adopting and monitoring AI technologies and ensuring their compliance, in alignment with fiduciary responsibilities.
- Enhance Risk Mitigation:
- Fiduciary Liability Insurance: Consider obtaining or enhancing fiduciary liability insurance to address potential claims arising from the use of AI.
- Data Privacy and Security: Enhance data privacy and security measures to safeguard sensitive participant information processed by AI tools.
- Bias Mitigation: Establish procedures to regularly test and validate AI tools for bias, ensuring compliance with anti-discrimination laws (for a minimal illustration of such a test, see the sketch following this list).
- Integrate AI Considerations into Requests for Proposals (RFPs): When selecting vendors, include specific AI-related criteria in RFPs, such as requiring vendors to demonstrate or certify compliance with state and federal regulations and adherence to industry best practices for AI usage.
- Monitor Legal and Regulatory Developments: Stay informed about new state and federal AI regulations, along with the developing case law related to AI and ERISA litigation. Establish a process for routine legal reviews to assess how these developments impact plan operations.
- Provide Training: Educate fiduciaries, administrators, and relevant staff on the potential risks and benefits of AI and other emerging technologies in plan administration and on the importance of compliance with applicable laws. The training should cover legal obligations, best practices for implementing AI, and strategies for mitigating risks.
- Document Due Diligence: Maintain comprehensive documentation of all steps taken to assess and monitor AI tools, including records of audits, vendor communications, and updates to internal policies. Clear documentation can serve as a crucial defense in the event of litigation.
- Assess Applicability of Section 1557 to Your Plan: Health and welfare plan fiduciaries should determine whether their organization’s health plan is subject to Section 1557 and whether OCR’s guidance directly applies to its operations; if not, they should confirm and document why not.
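To make the bias-testing recommendation above more concrete, the following is a minimal sketch of one common screening technique: a disparate-impact check based on the EEOC’s “four-fifths” rule of thumb, applied here to hypothetical AI-assisted claim-approval outcomes. The data, group labels, and 80% threshold are illustrative assumptions only; this is not a legal standard of compliance for any particular plan, and actual testing protocols should be designed with counsel and qualified analysts.

```python
# Minimal sketch: four-fifths (80%) disparate-impact check on
# hypothetical AI-assisted claim-approval outcomes, grouped by a
# protected characteristic. All data and group names are illustrative.

from collections import defaultdict

def approval_rates(records):
    """Compute the approval rate per group from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: appr / total for g, (appr, total) in counts.items()}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times
    the highest group's rate (the EEOC four-fifths rule of thumb)."""
    benchmark = max(rates.values())
    return {g: (r, r / benchmark >= threshold) for g, r in rates.items()}

# Hypothetical outcomes: (group, was_claim_approved)
sample = [("A", True)] * 90 + [("A", False)] * 10 \
       + [("B", True)] * 65 + [("B", False)] * 35

for group, (rate, passes) in four_fifths_check(approval_rates(sample)).items():
    print(f"Group {group}: approval rate {rate:.0%}, "
          f"{'passes' if passes else 'FLAGS'} the 80% check")
```

On this sample data, Group B’s 65% approval rate is about 72% of Group A’s 90% rate, so the check flags it for further review. A flag of this kind is a starting point for investigation, not a finding of discrimination.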
Fiduciaries must remain vigilant regarding AI’s increasing role in employee benefit plans, particularly amid regulatory uncertainty. Taking proactive measures and adopting robust risk management strategies can help mitigate risk and support compliance with current and anticipated legal standards. By committing to diligence and transparency, fiduciaries can leverage the benefits of AI while safeguarding the interests of plan participants. At Foley & Lardner LLP, we have experts in AI, retirement planning, cybersecurity, labor and employment, finance, fintech, regulatory matters, healthcare, and ERISA who regularly advise fiduciaries on potential risks and liabilities related to these and other AI-related issues.