RETAIL INDUSTRY 2021 YEAR IN REVIEW
In October 2021, the Equal Employment Opportunity Commission (EEOC) announced an initiative to ensure that artificial intelligence (AI) used in the workplace does not lead to violations of the nation’s antidiscrimination laws. The EEOC, through an internal working group, plans to gather information about the design, adoption and impact of hiring and employment-related technologies, launch listening sessions with key stakeholders and issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.
The announcement should come as no surprise to those monitoring the EEOC’s movements. Indeed, the EEOC’s interest in AI can be traced back to an October 2016 public EEOC meeting discussing the use of big data in the workplace.
Employment lawyers, EEOC commissioners and computer scientists at that meeting agreed that AI should not be viewed as a panacea for employment discrimination. The technology, if not carefully implemented and monitored, can introduce and even exacerbate unlawful bias. This is because algorithms generally rely on a set of human inputs, such as resumés of high-performing employees. If those inputs lack diversity, the algorithm may reinforce existing institutional bias at breakneck speed.
Recently, EEOC commissioners have remarked that the agency is wary of undisciplined AI implementation that may perpetuate or accelerate bias in the workplace. As a result, the EEOC may consider the use of commissioner charges—agency-initiated investigations unconnected to an employee’s charge of discrimination—to ensure employers are not using AI in an unlawful manner that violates Title VII of the Civil Rights Act (Title VII) or the Americans with Disabilities Act (ADA).
Given the EEOC’s brightening spotlight on AI, retailers using such technologies should take steps to minimize the risks and maximize the benefits. This article will offer an overview of the merits and potential pitfalls of employment-related AI technologies, provide a refresher on commissioner charges and propose actions that retailers can take to reduce the risk of becoming the target of EEOC investigations.
Benefits of AI for Retailers
The landscape of AI technology is continually growing. Some retailers use automated candidate sourcing technology to search social media profiles to determine which job postings should be advertised to particular candidates. Others use video interview software to analyze facial expressions, body language and tone to assess whether a candidate exhibits preferred traits. The use, however, is not limited to the hiring process. Some retailers utilize AI software for workforce optimization—allowing AI to create employee schedules, taking into account a multitude of variables such as employee availability, local or regional pay and timekeeping laws, as well as business initiatives and seasonal fluctuations.
Regardless of the precise tool, AI is marketed to retailers as a technological breakthrough that provides simplicity, enhances the quality of candidates, promotes efficiency and improves diversity.
Perhaps the most obvious of these benefits is time. AI can, for example, save recruiting departments countless hours of poring over resumés for acceptable candidates. This is particularly true for larger retailers that receive thousands of applications each year. The time freed up can be spent on more productive activities.
AI also can expose retailers to uncharted pools of talent, and with a larger umbrella of candidates, retailers can expect more diverse and qualified new hires. Even more, removing or curtailing human decision making can help remove unconscious, or even intentional, human biases from hiring, scheduling and other employment-related decisions.
Potential for Discrimination
AI promises significant rewards, but it also carries considerable risk. Although AI tools presumably harbor no intent to discriminate unlawfully, the absence of intent does not absolve the employers that use them from liability. The law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact). The larger risk for AI lies with disparate impact claims. In such lawsuits, intent is irrelevant; the question is whether a facially neutral policy or practice (e.g., use of an AI tool) has a disparate impact on a particular protected group, such as one defined by race, color, national origin, gender or religion.
The diversity of AI tools means that each type of technology presents unique potential for discrimination. One common thread, however, is the potential for input data to create a discriminatory impact. Many algorithms rely on a set of inputs to understand search parameters. For example, a resumé screening tool is often set up by uploading sample resumés of high-performing employees. If those resumés favor a particular race or gender, and the tool is instructed to find comparable resumés, then the technology will likely reinforce the existing homogeneity.
Some examples are less obvious. Sample resumés may include employees from certain zip codes that are home to predominantly one race or color. An AI tool may favor those zip codes, disfavoring applicants from other zip codes of different racial composition. Older candidates may be disfavored by an algorithm’s preference for “.edu” email addresses. In short, if a workforce is largely composed of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another race or gender.
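To make the proxy effect concrete, consider the simplified sketch below. It is purely illustrative: the features, zip codes, applicants and scoring logic are hypothetical and are not drawn from any actual vendor’s product. The point is that a screener scoring applicants by similarity to sample resumés can penalize an equally experienced candidate from an unfamiliar zip code, or one without a “.edu” address, even though race and age are never inputs.

```python
# Illustrative sketch of proxy bias in a similarity-based resume screener.
# All features, zip codes and applicants are hypothetical.

# "Training" inputs: resumes of current high performers. Note the
# homogeneity -- every sample comes from the same two zip codes and
# every sample has a ".edu" email address.
sample_resumes = [
    {"zip": "60601", "edu_email": True, "years_exp": 5},
    {"zip": "60601", "edu_email": True, "years_exp": 7},
    {"zip": "60602", "edu_email": True, "years_exp": 6},
]

def similarity_score(applicant):
    """Average similarity of an applicant to the sample resumes.

    Race and age are never inputs, but zip code and email domain can
    act as proxies for them, so the score can still skew results.
    """
    total = 0.0
    for sample in sample_resumes:
        total += applicant["zip"] == sample["zip"]              # zip-code match
        total += applicant["edu_email"] == sample["edu_email"]  # ".edu" address
        # Experience similarity, scaled to the range [0, 1].
        total += 1 - min(abs(applicant["years_exp"] - sample["years_exp"]), 5) / 5
    return total / len(sample_resumes)

applicants = [
    {"name": "Applicant A", "zip": "60601", "edu_email": True, "years_exp": 6},
    {"name": "Applicant B", "zip": "60628", "edu_email": False, "years_exp": 6},
]

for applicant in applicants:
    print(applicant["name"], round(similarity_score(applicant), 2))
# Applicant B has identical experience yet scores far lower, purely
# because of zip code and email domain -- features that may correlate
# with protected characteristics.
```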
Commissioner Charges as a Tool to Investigate AI-based Discriminatory Impacts
The potential for AI to reject hundreds or thousands of job applicants based on biased inputs or flawed algorithms has the EEOC’s attention. And because job applicants are often unaware that they were excluded from certain positions because of flawed or improperly calibrated AI software, the EEOC may rely upon commissioner charges as an important tool to uncover unlawful bias under Title VII and the ADA, most likely under the rubric of disparate impact discrimination.
42 U.S.C. § 2000e-5(b) authorizes the EEOC to investigate charges of discrimination “filed by or on behalf of a person claiming to be aggrieved, or by a member of the Commission” (emphasis added). Unlike employee-initiated charges, commissioner charges can be proposed by “any person or organization.” Indeed, it is the origin that distinguishes commissioner charges from employee-initiated ones.
The EEOC has explained that commissioner charges generally come about if 1) a field office learns of possible discrimination from local community leaders, direct observation or a state-run fair employment office; 2) a field office learns of a possible pattern or practice of discrimination during its investigation of an employee charge; or 3) a commissioner learns about discrimination and asks a field office to perform an investigation.
Regional EEOC field offices refer proposed requests for commissioner charges to the EEOC’s Executive Secretariat, which then distributes such requests to the commissioners on a rotating basis. A commissioner then determines whether to sign a proposed charge, authorizing the field office to perform an investigation. Commissioners, however, can bypass this referral procedure and file a charge directly with a regional field office.
Once filed, commissioner charges follow the same procedure as employee-initiated charges. The respondent is notified of the charge and the EEOC requests documents and/or interviews with company personnel. If needed, the agency can utilize its administrative subpoena power and seek judicial enforcement. The EEOC’s regulations provide that the commissioner who signed the charge must abstain from making a determination in the case.
If the agency ultimately determines that there is reasonable cause to believe discrimination occurred, the EEOC will generally attempt conciliation with the employer. The same remedies available under Title VII disparate impact claims—equitable relief in the form of back pay and/or injunctive relief—are available to aggrieved individuals.
Steps to Mitigate Discrimination Risks
Retailers should be mindful of the EEOC’s focus on this topic and the availability of commissioner charges to uncover disparate impacts without the need for an employee charge. To avoid becoming the target of such investigations, retailers should consider the following steps:
First, those considering an AI tool should demand that AI vendors disclose sufficient information to explain how the software makes employment decisions. Vendors often do not want to disclose proprietary information relating to how their tools function and interpret data. Retailers may ultimately be liable for the tools’ results, however, so it is important that they understand how candidates are selected. At a minimum, a retailer should obtain strong indemnity rights.
Second, even after obtaining assurances and indemnification, retailers should consider auditing the AI tool before relying upon it for decisions. To do this, retailers need to be able to identify the candidates that the tool rejected, not just those who were accepted. Thus, retailers should verify with vendors that data is preserved so that they can properly audit the tool and examine results to determine whether there was a negative impact on individuals in protected classes. This auditing should occur regularly—not solely at initial implementation.
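One widely cited benchmark for such an audit is the “four-fifths rule” in the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80 percent of the rate for the highest-selected group is generally regarded as preliminary evidence of adverse impact. The sketch below shows the arithmetic on hypothetical counts; an actual audit would draw its figures from the tool’s preserved decision logs and follow any flag with formal statistical analysis.

```python
# Minimal adverse-impact check using the four-fifths (80%) rule of
# thumb from the EEOC's Uniform Guidelines. All counts are hypothetical.

def four_fifths_check(outcomes):
    """outcomes maps group name -> (applicants, selected).

    Returns each group's selection rate, its impact ratio against the
    most-selected group, and whether that ratio falls below 0.8.
    """
    rates = {g: selected / applicants
             for g, (applicants, selected) in outcomes.items()}
    benchmark = max(rates.values())  # highest group's selection rate
    return {g: {"rate": round(rate, 3),
                "impact_ratio": round(rate / benchmark, 3),
                "flag": rate / benchmark < 0.8}
            for g, rate in rates.items()}

# Hypothetical results from an AI resume-screening tool.
outcomes = {
    "group_1": (200, 90),  # 90 of 200 applicants advanced (45%)
    "group_2": (180, 54),  # 54 of 180 applicants advanced (30%)
}

for group, result in four_fifths_check(outcomes).items():
    print(group, result)
# group_2's impact ratio is 0.667, below the 0.8 threshold -- a flag
# warranting closer review, not by itself proof of discrimination.
```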
Third and perhaps most critically, retailers should ensure that the input or training data upon which the tool relies (e.g., resumés of model employees) does not reflect a homogenous group. If the input data reflects a diverse workforce, a properly functioning algorithm should, in theory, replicate or enhance that diversity.
Finally, as this is an emerging field, retailers need to stay abreast of developments in the law. When in doubt, companies should consult with employment counsel when deciding whether and how to use AI to improve the productivity, diversity and capability of their workforce.