On April 25, 2023, the CFPB, FTC, EEOC, and the Civil Rights Division of the DOJ issued a joint statement outlining the agencies’ collective commitment to monitor the development and use of automated systems and artificial intelligence and to enforce their respective authorities where such systems produce unlawfully discriminatory outcomes. The joint statement explains that potential discrimination in automated systems can come from different sources, including:
- Data and Datasets. Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.
- Model Opacity and Access. Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even to the developer of the tool. This lack of transparency can make it more difficult for developers, businesses, and individuals to know whether an automated system is fair.
- Design and Use. Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.
The joint statement further detailed several recent instances in which the agencies expressed concern regarding potentially harmful uses of automated systems. For example:
- CFPB. In May of 2022, the CFPB published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used.
- FTC. In June of 2022, the FTC issued a report evaluating the use and impact of AI in combatting online harms identified by Congress. The report expressed concerns that AI can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. In February of this year, the FTC also warned market participants that using automated tools that have discriminatory impacts may constitute a violation of the FTC Act.
- EEOC. In addition to enforcement activities on discrimination related to AI and automated systems, in May of 2022, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions regarding job applicants and employees.
- Civil Rights Division. In January of this year, the Civil Rights Division filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services.
Putting it into Practice: Through this joint statement, the federal regulators reiterated that they are keeping a close eye on automated systems and the consumer-facing companies that utilize them. The agencies appear more committed than ever to coordinating enforcement actions in order to protect consumers, whether the alleged legal violations occur through traditional means, automated systems, or other advanced technologies. Accordingly, consumer-facing companies that utilize AI and other algorithmic tools may wish to consider implementing testing protocols to ensure that their use of AI is not causing discriminatory outcomes triggering potential violations of federal law.
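For teams considering what such a testing protocol might look like, the sketch below is a minimal, hypothetical illustration, not legal guidance: it computes group-level selection rates from model decisions and applies the EEOC’s longstanding “four-fifths” rule of thumb, under which a group selection rate below 80% of the highest group’s rate can signal potential adverse impact. The data, function names, and threshold here are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a disparate-impact screen using the
# EEOC's "four-fifths" rule of thumb. All data and names below are
# illustrative assumptions, not a compliance standard.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def four_fifths_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate < threshold * best) for g, rate in rates.items()}

# Hypothetical model outputs: (demographic group, approved?)
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

for group, (rate, flagged) in four_fifths_flags(sample).items():
    note = "  <-- below four-fifths threshold" if flagged else ""
    print(f"group {group}: selection rate {rate:.2f}{note}")
```

In practice, a check like this would run over production decision logs and a richer set of groups, and a flagged disparity would prompt further investigation, in consultation with counsel, rather than an automatic conclusion that a violation has occurred.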