On April 25, 2023, four federal agencies (the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission (EEOC)) released a joint statement reaffirming their commitment to pursue enforcement efforts against companies whose use of advanced technology marketed as artificial intelligence (AI) results in discriminatory outcomes.
“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” CFPB Director Rohit Chopra said. “Today’s joint statement makes clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”
The four agencies have previously expressed concern about potentially harmful uses of automated systems. The joint statement reiterates the agencies’ commitment to enforce their respective laws and regulations to prevent discriminatory outcomes and to ensure that the use of advanced technologies, including AI, is consistent with federal laws.
Agency Enforcement Authority
Existing legal authorities apply to automated systems and innovative new technologies just as to other practices. The CFPB, DOJ’s Civil Rights Division, EEOC, and FTC are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other legal protections.
- The CFPB supervises, sets rules for, and enforces numerous federal consumer financial laws, and protects consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination.
- The FTC protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the U.S. economy by enforcing the FTC Act and numerous other laws and regulations.
- The DOJ’s Civil Rights Division enforces constitutional provisions and federal statutes prohibiting discrimination across many arenas, including education, criminal justice, employment, housing, lending, and voting.
- The EEOC enforces federal laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee due to a person’s race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information (including family medical history).
Enforcement actions based on perceived failures to address discriminatory outcomes appear likely, particularly where regulators see noncompliance with the Equal Credit Opportunity Act’s prohibition against discrimination arising from a business’s use of a new or complex decision-making tool. “We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina M. Khan said. “Technological advances can deliver critical innovation, but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
“As social media platforms, banks, landlords, employers, and other businesses choose to rely on artificial intelligence, algorithms, and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result,” Assistant Attorney General Kristen Clarke of DOJ’s Civil Rights Division said.
The joint statement follows a series of agency actions aimed at advanced technologies used in making credit decisions. For example:
- The CFPB published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology used. The circular also made clear that the complexity, opacity, or novelty of the technology used to make a credit decision is not a defense for violating these laws.
- In January 2023, the DOJ filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services.
- In addition to its enforcement activities on discrimination related to AI and automated systems, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.
- The FTC issued a report evaluating the use and impact of AI in combating online harms identified by Congress. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance. The FTC has also warned market participants that it may violate the FTC Act to use automated tools with discriminatory impacts, make claims about AI that are not substantiated, or deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product trained on data that should not have been collected.
The CFPB has also launched a tech worker whistleblower portal, outlining how these workers can report conduct they believe may violate any of the rules or laws over which the CFPB has authority.
Takeaways
Businesses should keep in mind the continuing scrutiny of automated systems and advanced technologies, particularly those marketed as AI, and should take steps to ensure compliance with federal laws and regulations. Businesses should ensure that consumer disclosures appropriately explain the use of any algorithmic decision-making technology, especially where algorithms are used to assign risk scores to consumers. Additionally, companies may wish to review their decision-making processes, examining not only inputs but also self-testing outcomes to manage the risk of even an appearance of discriminatory conduct.