AI “Agency” Liability: The Workday Wake-Up Call?


In Derek Mobley v. Workday, Inc., a federal court in California recently allowed a lawsuit to continue against Workday, a human resources (HR) technology vendor, over potentially discriminatory hiring practices stemming from an employer’s use of Workday’s HR artificial intelligence (AI) technology. The court held that an employer’s use of Workday’s HR AI technology in hiring decisions could create direct liability for Workday, not just the employer, under an “agency” theory [1].

Traditionally, agency relationships involve human beings acting on behalf of another. By extending this concept to AI, the court opened an avenue for holding technology providers accountable for the actions of their AI technology in the HR context. As noted by the court [2]:

The [Plaintiff] plausibly alleges that Workday's customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday.

Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one. To the contrary, courts applying the agency exception have uniformly focused on the “function” that the principal has delegated to the agent, not the manner in which the agent carries out the delegated function. 

In short, if an AI system is designed to make decisions that would typically be made by a human employee, and an employer relies on that system, the vendor of the AI could be treated as an agent of the employer for purposes of liability.

The Equal Employment Opportunity Commission (EEOC) has been vocal about the implications of AI in the workplace, emphasizing algorithmic fairness and issuing technical guidance for employers on how to measure adverse impact when employment selection tools use AI. Consistent with these efforts, the EEOC filed an amicus brief in Mobley supporting, in part, the argument that, if the allegations are true, Workday could or should be considered an indirect employer or an agent of the employer.
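
For illustration only, the adverse-impact measurement discussed in the EEOC’s technical guidance commonly begins with the “four-fifths rule” of thumb: compare each group’s selection rate and treat a ratio below 0.80 as a possible indicator of adverse impact. The minimal Python sketch below uses hypothetical applicant numbers (not drawn from the case or the guidance) to show the arithmetic; a real analysis requires legal and statistical review.

    # Hypothetical example of the "four-fifths rule" comparison referenced
    # in the EEOC's technical guidance on adverse impact. All numbers are
    # illustrative only.

    applicants = {
        # group: (number selected by the tool, number who applied)
        "group_a": (48, 100),
        "group_b": (30, 100),
    }

    # Selection rate = selected / applied, per group.
    rates = {group: selected / total for group, (selected, total) in applicants.items()}

    # Impact ratio = lowest selection rate divided by highest selection rate.
    impact_ratio = min(rates.values()) / max(rates.values())

    for group, rate in rates.items():
        print(f"{group}: selection rate {rate:.0%}")
    print(f"Impact ratio: {impact_ratio:.2f}")

    # A ratio below 0.80 is generally treated as a rule-of-thumb indicator of
    # adverse impact -- a screening signal, not a definitive legal conclusion.
    if impact_ratio < 0.80:
        print("Potential adverse impact: ratio falls below the four-fifths threshold.")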

Beyond these opinions, several state and local laws are also emerging to address the risks of AI in HR. New York City, for instance, has enacted a law regulating the use of AI in employment decisions, mandating bias audits and transparency for algorithms used in hiring. Although Mobley is specific to the employment context, and regardless of its outcome, the case may serve as a wake-up call for other AI “use cases” beyond HR. There is significant pressure to use AI to create efficiencies and even to reduce or replace human workloads. That movement, combined with the prospect of holding an AI vendor to the same liability as its user, raises interesting questions about a vendor’s liability for its own products, reinforces that a company’s responsibilities cannot be discharged simply by using AI, and poses the broader question of how liabilities should be allocated among the parties. This article discusses some of these issues, along with an “updated” list of considerations to help companies use AI safely.

Continued Responsibility for Using AI:

Mobley reminds us that companies cannot abdicate their legal responsibility to keep a “human in the loop” or their responsibility for the ethical and unbiased hiring practices that may be implicated by using AI tools. This is not a new concept, but the Workday court underscores the following:

AI in a Mobley World:

From both a vendor and a buyer standpoint, the prospect of holding an AI software vendor liable for the use of its tools in the same manner as the user could ultimately reshape the vendor-buyer relationship and increase the need for collaboration to understand and minimize liability around ethical AI use in HR.

We generally recommend a simple but effective three-prong analysis for any AI:

  1. What is the use case?
  2. What AI is being used to address the use case?
  3. What are the risks associated with using the AI for the particular “use case”?

Exercising this discipline and staying in lockstep with new laws, regulations, industry standards, government guidance, and case law will help ensure a responsible approach to using AI. Additionally, below are strategies for both vendors and buyers:

For Buyers:

For Vendors:

The Mobley case serves as a reminder that the legal landscape surrounding AI is evolving, and with it, the responsibilities of all parties involved. By adopting a disciplined approach to AI deployment and staying informed about emerging laws and industry standards, companies can navigate this complex environment and harness the benefits of AI while minimizing legal exposure.


[1] Title VII, the ADA, and the ADEA all define the term “employer” to include “any agent of” an employer. 42 U.S.C. §§ 2000e(b), 12111(5)(A); 29 U.S.C. § 630(b). According to the court, “...[i]ndirect employers and agents of employers qualify as “employers” under those statutes and the case law interpreting them. Mobley alleges that Workday is liable for employment discrimination on three theories: as an (1) employment agency, (2) agent of employers, and (3) indirect employer. Workday contends that, as a software vendor, it is not a covered entity under these statutes and moves to dismiss the federal claims against it. The motion to dismiss the federal claims is denied, as the FAC plausibly alleges Workday's liability on an agency theory.”

[2] Quoting Derek Mobley v. Workday, Inc., No. 4:23-cv-02353, 10 (N.D. Cal. 2024)


Copyright ©2025 Nelson Mullins Riley & Scarborough LLP
National Law Review, Volume XIV, Number 253