Possible Bias in AI Selectivity? EEOC Issues Guidance on Use of Artificial Intelligence in Employment Selection Procedures

Recent advancements in Artificial Intelligence (AI) and the emergence of ChatGPT have opened up new possibilities for employers to enhance efficiency and reduce human error.  But with these advances comes potential risk.  Acknowledging the growing use of AI in the workplace, the Equal Employment Opportunity Commission (EEOC) launched its Artificial Intelligence and Algorithmic Fairness Initiative in late 2021.  On May 18, 2023, the EEOC released a technical assistance document addressing the use of AI by employers when making employment decisions such as hiring, promotion, and firing.

Although “AI” can have somewhat different meanings, Congress has defined it to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”  (National Artificial Intelligence Initiative Act of 2020, section 5002(3)).  In recent years, employers have started to use AI to scan resumes for keywords, monitor employee keystrokes as a productivity measure, and test employees for “job fit.”  AI also is being used in candidate interviews to evaluate facial expressions and speech patterns, and in testing software that measures personality, aptitude, and cognitive skills to determine “job fit.”

Using AI to screen candidates, promote employees, or select employees for termination falls within the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures (“Guidelines”), issued under Title VII of the Civil Rights Act of 1964.  Although AI may seem like the perfect solution for the hiring process, free from the human biases that can lead to errors, its use may have a disparate impact, where facially neutral tests or selection procedures disproportionately exclude individuals based on a protected category and are not “job-related for the position in question and consistent with business necessity.”  Accordingly, employers utilizing AI, either directly or through third-party software vendors, can be held responsible if their AI practices violate Title VII’s standards for disparate treatment and disparate impact.
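As one rough screen for disparate impact, the Guidelines describe a “four-fifths rule”: a selection rate for one group that is less than 80% of the rate for the most favorably selected group may indicate adverse impact. A minimal sketch of that comparison follows; the group names and applicant counts are hypothetical, for illustration only, and passing the check does not by itself establish legal compliance.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the
    highest group's rate (the Guidelines' four-fifths rule)."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

# Hypothetical applicant pools (counts are illustrative only)
rates = {
    "group_a": selection_rate(48, 80),  # 60% selected
    "group_b": selection_rate(12, 40),  # 30% selected
}
flags = four_fifths_flags(rates)
# group_b's rate (30%) is half of group_a's (60%), below the
# four-fifths threshold, so it would be flagged for further review.
```

A ratio below 0.8 is only a starting point; statistical significance and practical significance are also considered in practice.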

Recommendations for Employers Using AI in the Workplace

To ensure fair and unbiased AI usage in the workplace, employers should consider the following recommendations:

  1. Conduct a job analysis by identifying the essential functions, duties, skills, qualifications, and competencies required for each position.  Ensure that the AI system aligns with the identified criteria to avoid any implicit bias that already may be present in an industry.

  2. Validate the AI system by testing and evaluating it to ensure its accuracy, validity, and freedom from bias for the intended purpose and target population.  Conduct disparate impact analyses to identify any disproportionate effect on protected groups, and update AI systems regularly to address any issues.

  3. Involve human decision-makers who can review and override the AI system’s results if necessary. Provide training and guidance to these decision-makers on how to use the system properly and ethically.

© 2024 Vedder Price
National Law Review, Volume XIII, Number 200