Brave New World: The EEOC's Artificial Intelligence Initiative
Tuesday, December 14, 2021

The use of artificial intelligence (“AI”) and machine learning in the workplace is growing exponentially, particularly in hiring. Over the last two decades, web-based applications and questionnaires have made paper applications nearly obsolete. As employers seek to streamline recruitment and control costs, they have turned to computer-based screening tools such as “chatbots” to communicate with job applicants, schedule interviews, ask screening questions, and even conduct video-conference interviews and presentations in the selection process. Employers of all sizes are building their own systems or hiring vendors to design and implement keyword searches, predictive algorithms, and even facial recognition algorithms to find the best-suited candidates. The algorithms in these computer models make inferences from data about people, including their identities, demographic attributes, preferences, and likely future behaviors. Now faced with the impact of COVID-19 and a growing talent shortage, employers may see AI as a way to move through the hiring process more efficiently.

However, the use of AI technology is controversial. Researchers have voiced increasing concern about the inherent risk of automating systemic bias. Because algorithms rely on historical datasets and human inputs, the technology can introduce new bias or exacerbate existing bias, and AI systems can reach unfounded conclusions that are effectively based on protected characteristics.
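A minimal, hypothetical sketch may help illustrate how this happens: if a screening model is built to imitate past hiring decisions, a word that correlates with a protected group in that history (here, “women’s,” echoing the Amazon episode discussed below) drags a candidate’s score down even though it says nothing about qualifications. The resumes, scoring rule, and outcomes are illustrative assumptions only, not any vendor’s actual method.

```python
# A toy "screening model" that scores resumes by how often their words appeared
# in past hires versus past rejections. All data here is hypothetical.
from collections import Counter

# Hypothetical historical data: past resumes and whether the candidate was hired.
history = [
    ("software engineer chess club captain", True),
    ("software engineer rugby team captain", True),
    ("software engineer women's chess club captain", False),
    ("software engineer women's soccer team captain", False),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Higher when a resume's words resemble past hires, lower when they resemble past rejections."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# "women's" never appears among past hires, so it lowers the score
# even though it is unrelated to the job requirements.
print(score("software engineer chess club captain"))          # 0
print(score("software engineer women's chess club captain"))  # -2
```

The point is not this particular scoring rule but the pattern: a model trained to match historical outcomes will tend to reproduce whatever bias those outcomes reflect.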

In 2016, recognizing these concerns over bias, the U.S. Equal Employment Opportunity Commission (the “Commission” or “EEOC”) began to examine the impact of AI, people analytics, and “big data” on employment. “Big data” was characterized as the use of algorithms, data scraping of the internet, and other technology-based methods of evaluating huge amounts of information about individuals. The Commission acknowledged in its 2016 systemic program review that “[h]iring or nonselection remains one of the most difficult issues for workers to challenge in a private action, as an applicant is unlikely to know about the effect of hiring tests or assessments, or have the resources to challenge them.” Despite this, and reports that the agency investigated the use of an algorithm in hiring in two cases in 2019, it took little further action until 2021.

Meanwhile, some high-profile examples flagged the risk of AI bias. In 2018, Amazon stopped using an algorithmic resume-review program after its results showed bias against female applicants. In 2019, the Electronic Privacy Information Center filed an FTC complaint against recruiting company HireVue, arguing that its face-scanning technology constituted an unfair and deceptive trade practice. HireVue’s “AI-driven assessments” use video interviews to analyze hundreds of thousands of data points related to a person’s speaking voice, word selection, and facial movements. With a lack of federal focus on the issue, states began to generate legislation and resolutions in 2019, and during 2021 at least seventeen states introduced bills or resolutions, or enacted or implemented laws, related to the use of AI.

Finally, on December 8, 2020, ten U.S. Senators sent a letter to then-EEOC Chair Janet Dhillon, calling on the agency to exercise its oversight authority regarding hiring technologies. The timing of the letter reflected the Senators’ concern that businesses reopening after pandemic closures might seek to hire quickly, driving an upswing in the number of employers who “turn to technology to manage and screen large numbers of applicants to support a physically distant hiring process.”

The Senators placed responsibility for addressing the risk of bias and discrimination from the use of hiring technologies squarely with the Commission, stating, “The Commission is responsible for ensuring that hiring technologies do not act as ‘built-in headwinds’ for minority groups. Effective oversight of hiring technologies requires proactively investigating and auditing their effects on protected classes, enforcing against discriminatory hiring assessments or processes, and providing guidance for employers on designing and auditing equitable hiring processes.”

Among other questions, the Senators asked the Commission to provide information about its use of “its authority to investigate and/or enforce against discrimination related to the use of hiring technologies.” The letter asked the Commission to identify what authority it could use, or has used, to “study and investigate the development and design, use, and impacts of hiring technologies absent an individual charge of discrimination?” Notably, the letter also queried whether the Commission intended to issue guidance and regulations and to conduct research.

Re-enter the EEOC. On October 28, 2021, EEOC Chair Charlotte A. Burrows announced a new agency initiative focused on ensuring that the use of AI by employers at all employment stages complies with federal anti-discrimination law. A companion press release outlined the EEOC’s five initial goals for its initiative, which were notably reflective of the type of action queried by the Senators:

  • Establish an internal working group to coordinate the agency’s work on the initiative;

  • Launch a series of listening sessions with key stakeholders regarding use of algorithmic tools and their employment ramifications;

  • Gather information about the adoption, design, and impact of hiring and other employment-related technologies;

  • Identify promising practices; and

  • Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.

In making the announcement, Burrows stated, “The bottom line here, really, is despite this aura of neutrality and objectivity around artificial intelligence and predictive tools that incorporate algorithms, they can end up reproducing human biases if we’re not careful and aware that we need to check for that.” She characterized the risk-benefit analysis that must attach to the use of AI, saying, “Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment. At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Burrows’ remarks follow a series of comments and articles from fellow EEOC Commissioner Keith Sonderling beginning in early spring 2021. In September, Sonderling signaled that the EEOC may use Commissioner charges — agency-initiated investigations unconnected to a discrimination charge — to ensure employers are not using AI unlawfully. Sonderling explained that Commissioner charges would assist enforcement because job applicants and employees are often unaware that they have been excluded from a job by flawed, biased, or improperly designed AI software. In 2021, and prior to the agency’s announcement of the AI initiative, Commission investigators participated in AI training.

The Commissioners’ public comments clearly signal that the agency has made the use of hiring and employment technologies a focus of its work on systemic discrimination, and employers using AI or other such technologies should exercise caution accordingly. Although the initiative announcement characterized the Commission as initially concentrating on information collection, education, and guidance, the investigator training points to a companion focus on enforcement. Burrows signaled this, saying, “Bias in employment arising from the use of algorithms and AI falls squarely within the Commission’s priority to address systemic discrimination.” Employers subject to a Charge investigation should be alert to Commission inquiries into their use of technology.

Proactive Steps for Employers

In light of the Commission’s statements of priority, employers using hiring and employment technology systems, such as artificial intelligence or algorithmic decision-making systems, should evaluate those systems and their use to ensure they do not produce biased results, and should audit them regularly to confirm bias is not introduced at a later stage; a simplified example of such an audit appears below. Additionally, if an employer has used or is considering an AI technology vendor, the employer should: (1) ensure the vendor understands the employer’s EEO obligations; (2) ask the vendor to explain how it proactively avoids bias in its process and results; and (3) consider making the avoidance of bias a material term of the vendor contract.
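One common starting point for such an audit is the “four-fifths rule” comparison of selection rates drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below is a minimal illustration of that comparison only; the group names and counts are hypothetical, and a real audit should involve counsel and appropriate statistical analysis.

```python
# Minimal sketch of a selection-rate ("four-fifths rule") comparison for a
# screening tool's outcomes. The groups and counts below are hypothetical.

# Hypothetical screening results by demographic group.
results = {
    "Group A": {"applicants": 400, "selected": 120},  # 30% selection rate
    "Group B": {"applicants": 300, "selected": 60},   # 20% selection rate
}

# Selection rate: share of each group's applicants who passed the screen.
rates = {group: r["selected"] / r["applicants"] for group, r in results.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio compares each group's rate to the highest group's rate;
    # a ratio below 0.80 is the conventional flag for further review.
    impact_ratio = rate / highest_rate
    flag = "review for adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```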

Moreover, employers using hiring and employment technology systems should implement policies governing their use, including requiring any manager using such technology to report biased results and prohibiting inappropriate or discriminatory use of the systems.

Finally, given the volume of Commission activity in this area, the agency may issue guidance on the use of hiring and employment technologies soon. Employers are advised to stay informed.
