Managers Who Use ChatGPT to Promote Employees – What Could Go Wrong?
Thursday, July 24, 2025

While artificial intelligence (AI) can be a powerful tool for managers seeking to make decisions efficiently, it must be used ethically and fairly. Companies no longer rely on AI solely to automate repetitive tasks or produce predictive analytics: recent studies have shown that over 60% of managers use AI for critical employment decisions, such as hiring, firing, layoffs, and promotions, and more than one in five managers use AI to make these decisions without any human input. As managers increasingly, and often blindly, rely on AI, companies may face significant legal exposure.

Although it may be tempting to use AI to streamline employment decisions (e.g., hiring, promotion, workforce reductions), it is critical to remember that AI output merely reflects the data the system receives. These systems cannot account for context, lack human judgment and empathy, and risk producing outcomes with unintended disparate impacts.

A Cautionary Tale (or Three)

In 2014, Amazon was one of the first companies to attempt to automate its hiring process. While testing its automated screening software, the company quickly noticed that the tool was disfavoring female applicants. The algorithm had been trained on a decade of historical data, which reflected a male-dominated applicant pool, and as a result it learned to favor male resumes, even downgrading any that included the word “women” or references to women’s organizations. Had Amazon not backstopped the system’s decision-making with human input, it likely would have considered few, if any, female applicants and faced serious legal consequences, including claims of gender discrimination.

Since this early case study, AI has rapidly evolved and grown in popularity. As a result, the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DOL) have recognized that AI-driven hiring and workplace decision-making may disproportionately screen out protected groups in violation of Title VII, the Americans with Disabilities Act, the Age Discrimination in Employment Act, various data privacy laws, and other applicable state and local laws. For example, the HR management services company Workday, Inc. is facing a collective action alleging that it violated federal antidiscrimination laws because its AI-based applicant recommendation system disproportionately rejected applications based on race, age, and disability. Similarly, the EEOC recently brought and settled a lawsuit against iTutorGroup, which had implemented AI-based application-review software that screened out female applicants aged 55 or older and male applicants aged 60 or older.

Best Practices

Even if an AI algorithm is programmed to ignore demographic information, such as race, age, or sex, certain attributes may nonetheless correlate with demographics and serve as proxies for them; for example, a zip code can track race and a graduation year can track age, so an algorithm trained on those fields can still produce a disparate impact. To avoid the legal risks that AI as a management tool can create, companies should focus on the following best practices:

  • Adopt AI governance and policies. Companies should adopt AI policies that address acceptable and prohibited uses, confidentiality considerations, the importance of mitigating bias and discrimination risks, transparency, and other restrictions and guidance related to the use of AI in the workplace. Consider the following questions in adopting an effective AI policy:
    • What is the company’s approach to AI governance in its commercial business and compliance program?
    • How is the company curbing any potential negative or unintended consequences of AI?
    • How is the company mitigating the potential for deliberate or reckless misuse of AI, including by company insiders? 
    • How does the company ensure accountability over its use of AI, including the baseline of human decision-making used to assess AI and how the company trains its employees on its use?
    • How does the company monitor the use of AI over time, especially with machine learning or large language models that may change after the initial testing or deployment? 
  • Implement formal AI training for managers. Two-thirds of managers who use AI to manage employees have not received any formal AI training. Given the risks AI poses, companies should train managers to use these systems effectively and in a nondiscriminatory fashion.
  • Validate the AI model and regularly conduct AI audits. Companies should understand the algorithms and data inputs behind the AI tools they use, including when a third party implements the tool; employers may be liable for discrimination claims regardless of whether the AI tool was developed in-house or by a vendor. Without a clear understanding of how an algorithm reaches its conclusions, the risk of a successful discrimination claim against your company is heightened. In addition, knowing how the algorithm may change over time due to machine learning is critical to planning periodic audits of the AI tool to prevent unintended consequences (one common statistical screen for such audits is sketched after this list).
  • Ensure sufficient human oversight exists to justify the ultimate employment decision. Importantly, AI should be used as a helpful tool — not as a company’s sole decision-maker. Humans still need to make decisions to prevent unintended disparate impact, consider relevant context, and uphold the company’s broader values and priorities.
  • Require disclosure on the use of AI. Disclosing the use of AI not only minimizes a company’s legal risk, but also maintains transparency and builds trust among its clients, customers, users, and employees.
  • Monitor state and federal legal developments. The Trump administration has moved away from the prior administration’s regulatory approach to AI. In contrast, various states have implemented stricter AI compliance laws, resulting in a patchwork of regulatory requirements. Colorado, Illinois, Maryland, Utah, and New York are examples of jurisdictions that have implemented legal requirements specific to AI. These laws often require independent bias audits, notification to candidates that AI will be used in making job decisions, and/or public disclosure of the types of AI that an employer deploys.
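
On the audit point above, one widely used first-pass screen is the EEOC’s “four-fifths rule”: if a protected group’s selection rate is less than 80% of the most-favored group’s rate, the tool may be having an adverse impact and warrants closer review. The Python sketch below illustrates that arithmetic only; the data, group labels, and function names are hypothetical, and a real audit would add statistical significance testing and legal review.

    from collections import defaultdict

    def selection_rates(decisions):
        # decisions: iterable of (group, selected) pairs, where selected is a bool.
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            total[group] += 1
            if was_selected:
                selected[group] += 1
        return {group: selected[group] / total[group] for group in total}

    def adverse_impact_flags(decisions, threshold=0.8):
        # Flag groups whose selection rate falls below `threshold` times the
        # highest group's rate (the EEOC four-fifths rule of thumb).
        rates = selection_rates(decisions)
        top_rate = max(rates.values())
        return {group: rate / top_rate
                for group, rate in rates.items()
                if rate / top_rate < threshold}

    if __name__ == "__main__":
        # Hypothetical outcomes from an AI screening tool (illustration only).
        outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
                    [("group_b", True)] * 20 + [("group_b", False)] * 80)
        print(selection_rates(outcomes))       # {'group_a': 0.4, 'group_b': 0.2}
        print(adverse_impact_flags(outcomes))  # {'group_b': 0.5}, below the 0.8 line

In this hypothetical, group_b’s selection rate (20%) is only half of group_a’s (40%), well under the four-fifths threshold, which is exactly the kind of pattern a periodic audit is meant to surface before it becomes the basis of a claim.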

Takeaways

As AI continues to transform the workplace, employers must stay abreast of the accompanying legal risks, ethical obligations, and compliance requirements. To mitigate the risks discussed above, companies should adopt an appropriate AI governance plan, provide the requisite training, and educate managers about the potential bias that can permeate a seemingly objective automated process.
