AI “Agency” Liability: The Workday Wake-Up Call?
Monday, September 9, 2024

In Derek Mobley v. Workday, Inc., a federal court in California recently allowed a lawsuit to continue against Workday, a human resources (HR) technology vendor, over potentially discriminatory hiring practices stemming from an employer’s use of Workday’s HR artificial intelligence (AI) technology. The court held that an employer’s use of Workday’s AI tools in hiring decisions could create direct liability for Workday itself, not just the employer, under an “agency” theory [1].

Traditionally, agency relationships involve human beings acting on behalf of another. By extending this concept to AI, the court opened an avenue for holding technology providers accountable for the actions of their AI technology in the HR context. As noted by the court [2]:

The [Plaintiff] plausibly alleges that Workday's customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday.

Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one. To the contrary, courts applying the agency exception have uniformly focused on the “function” that the principal has delegated to the agent, not the manner in which the agent carries out the delegated function. 

In short, if an AI system is designed to make decisions that would typically be made by a human employee, and an employer relies on that system, the vendor of the AI could be treated as an agent of the employer for purposes of liability.

The Equal Employment Opportunity Commission (EEOC) has been vocal about the implications of AI in the workplace, emphasizing algorithmic fairness and issuing new technical guidance for employers on how to measure adverse impact when employment selection tools use AI. Consistent with these efforts, the EEOC filed an amicus brief in Workday supporting, in part, the argument that, if the allegations are true, Workday could or should be considered an indirect employer or an agent of the employer.
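
To illustrate what “measuring adverse impact” can look like in practice, the EEOC’s longstanding “four-fifths rule” compares selection rates across groups: if one group’s selection rate is less than 80% of the highest group’s rate, the tool may warrant closer scrutiny. The short Python sketch below shows that arithmetic; the group labels and applicant counts are hypothetical, for illustration only.

  # Minimal sketch of the EEOC "four-fifths rule" adverse impact check.
  # Group names and counts are hypothetical, for illustration only.

  def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
      """Map each group to its selection rate (selected / total applicants)."""
      return {group: selected / applicants
              for group, (selected, applicants) in outcomes.items()}

  def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
      """Divide each group's selection rate by the highest group's rate."""
      top = max(rates.values())
      return {group: rate / top for group, rate in rates.items()}

  # Hypothetical screening results: (candidates advanced, candidates screened)
  outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

  rates = selection_rates(outcomes)
  for group, ratio in impact_ratios(rates).items():
      flag = "review for adverse impact" if ratio < 0.8 else "ok"
      print(f"{group}: rate={rates[group]:.2f}, ratio={ratio:.2f} -> {flag}")

Real audits involve considerably more than this (statistical significance, small-sample caveats, intersectional groups), but the core arithmetic is simple, which is one reason regulators expect it to be done.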

Beyond these judicial and agency positions, several state and local laws are emerging to address the risks of AI in HR. New York City, for instance, has enacted a law regulating the use of AI in employment decisions, mandating bias audits and transparency for algorithms used in hiring. Even though Mobley is specific to the employment context, and regardless of the outcome of the case, it may serve as a wake-up call for other AI use cases beyond HR. There is significant pressure to use AI to create efficiencies and even to reduce or replace human workloads. This movement, combined with the concept of holding an AI vendor to the same liability as its user, raises interesting questions about a vendor’s liability for its own products, reinforces that a company’s responsibilities cannot be offloaded simply by using AI, and raises the broader question of how liabilities are allocated among the parties. This article discusses some of these questions, along with an updated list of issues to consider to help use AI safely.

Continued Responsibility for Using AI:

Mobley reminds us that companies cannot abdicate their legal responsibility to keep a “human in the loop,” or their responsibility for the ethical and unbiased hiring practices implicated by AI tools. This is not a new concept, but the Workday court reminds us of the following:

  • Employer Liability Remains: The court emphasized that employers cannot use AI as a shield to avoid discrimination claims. Ultimately, the company that deploys the AI technology remains liable for its use.
  • Don't Be Blinded by Automation: The allure of AI efficiency should not blind you to the potential for bias or to your ultimate responsibility. A careful intake process and evaluation of each new use case will be required to understand how the AI technology is used and the risks associated with using it.

AI in a Mobley World:

From both a vendor and buyer standpoint, holding an AI software vendor liable for the technology’s use in the same manner as its user could ultimately reshape the relationship between the two, increasing the need for collaboration to understand and minimize liability around ethical AI use in HR.

We generally recommend a simple but effective three-pronged analysis for any AI:

  1. What is the use case?
  2. What AI is being used to address the use case?
  3. What are the risks associated with using the AI for the particular “use case”?

Exercising this discipline and staying in lockstep with new laws, regulations, industry standards, government guidance, and case law will help ensure a responsible approach to using AI. Additionally, below are strategies for both vendors and buyers:

For Buyers:

  • Scrutinize Vendor Contracts: Negotiate detailed contracts with robust indemnification clauses to protect your company from liabilities associated with discriminatory AI use. Include warranties guaranteeing that the AI complies with all relevant anti-discrimination laws and regulations. Consider adding audit rights to ensure transparency in the vendor's AI processes.
  • Conduct AI Testing: Before deploying AI, ensure through in-house testing, independent third parties, and/or sufficient documentation that the algorithms have been scrutinized for potential biases, and address these issues before they affect real-world decisions. The four-fifths-rule calculation sketched earlier is a common starting point for this kind of testing.
  • Implement Ongoing Monitoring: Establish continuous monitoring procedures to ensure the AI remains compliant with anti-discrimination laws and ethical principles over time. Use regular audits and performance reviews to detect deviations or emerging biases, and work with vendors to make necessary adjustments (see the monitoring sketch following this list).
  • Educate and Train Your Team: Provide comprehensive training to HR personnel and other users of AI technology to identify and mitigate potential biases. Ensure that human oversight is maintained throughout the hiring process to catch any AI-driven errors or unfair outcomes.
  • Develop Ethical AI Policies: Create and enforce comprehensive AI ethics policies that commit to the fair, transparent, and non-discriminatory use of AI across all company operations. These policies should be integrated into the company’s broader compliance framework and include clear guidelines for responsible AI use.
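
As a companion to one-time testing, ongoing monitoring can be as simple as recomputing the same selection-rate ratios over a rolling window of decisions and escalating to a human reviewer when any group dips below the four-fifths threshold. The Python sketch below is a hypothetical illustration; the window size, group labels, and escalation step are assumptions, not a prescribed implementation.

  # Hypothetical rolling monitor: recompute selection-rate ratios over a
  # window of hiring decisions and flag groups below the 0.8 threshold.
  from collections import defaultdict, deque

  class AdverseImpactMonitor:
      def __init__(self, window: int = 500, threshold: float = 0.8):
          self.decisions = deque(maxlen=window)  # rolling (group, selected) log
          self.threshold = threshold

      def record(self, group: str, selected: bool) -> list[str]:
          """Log one decision; return any groups currently below threshold."""
          self.decisions.append((group, selected))
          counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
          for g, s in self.decisions:
              counts[g][0] += int(s)
              counts[g][1] += 1
          rates = {g: sel / total for g, (sel, total) in counts.items()}
          top = max(rates.values())
          return [g for g, r in rates.items()
                  if top > 0 and r / top < self.threshold]

  # Usage sketch: feed each screening decision to the monitor as it happens.
  monitor = AdverseImpactMonitor(window=200)
  flagged = monitor.record("group_b", selected=False)
  if flagged:
      print("Escalate for human review:", flagged)

One design note: running a monitor like this on the buyer’s own decision logs, outside the vendor’s model, preserves an independent check even when the vendor’s internals are opaque.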

For Vendors:

  • Enhance Product Development Protocols: Implement rigorous testing and validation processes during AI development to identify and eliminate biases. This should include diverse data sets and continuous iteration to improve fairness and accuracy.
  • Offer Transparent Reporting: Provide clients with detailed documentation and reporting on the AI's development, testing, and deployment processes. Transparency builds trust and allows buyers to better understand the steps taken to mitigate bias and discrimination risks.
  • Incorporate Feedback Mechanisms: Develop channels for users to report issues or biases they encounter with the AI. Incorporate this feedback into ongoing development and improvement cycles to enhance the AI’s performance and compliance over time.

The Mobley case serves as a reminder that the legal landscape surrounding AI is evolving, and with it, the responsibilities of all parties involved. By adopting a disciplined approach to AI deployment and staying informed about emerging laws and industry standards, companies can navigate this complex environment and harness the benefits of AI while minimizing legal exposure.


[1] Title VII, the ADA, and the ADEA all define the term “employer” to include “any agent of” an employer. 42 U.S.C. §§ 2000e(b), 12111(5)(A); 29 U.S.C. § 630(b). According to the court, “...[i]ndirect employers and agents of employers qualify as “employers” under those statutes and the case law interpreting them. Mobley alleges that Workday is liable for employment discrimination on three theories: as an (1) employment agency, (2) agent of employers, and (3) indirect employer. Workday contends that, as a software vendor, it is not a covered entity under these statutes and moves to dismiss the federal claims against it. The motion to dismiss the federal claims is denied, as the FAC plausibly alleges Workday's liability on an agency theory.”

[2] Quoting Derek Mobley v. Workday, Inc., No. 4:23-cv-02353, at 10 (N.D. Cal. 2024).
