In April this year the Federal Trade Commission (“FTC”) issued a press release (the “April Release”) signaling the agency’s focus on advances in artificial intelligence (“AI”) going forward. The April Release highlighted the FTC’s purported concern with whether advances in AI can be applied to medicine, finance, business operations, media, and other sectors without “inadvertently introducing bias or other unfair outcomes.”
As an aside, important differences exist among various AI-powered platforms, as others have previously observed. The Director of the Information Technology Laboratory at the National Institute of Standards and Technology (“NIST”) testified in 2020 before the U.S. House Committee on Homeland Security that with the highest-performing AI algorithms there was not a “statistical level of significance” related to bias. As one recent example, NIST’s October 2021 evaluation of Clearview AI’s facial recognition algorithm found 99% accuracy across all demographics, underscoring the accuracy of advanced algorithms.
The April Release highlighted three laws that the FTC enforces and that are important to developers and users of AI:
- Section 5 of the FTC Act: Section 5(a) of the FTC Act provides that “unfair or deceptive acts or practices in or affecting commerce . . . are . . . declared unlawful.” 15 U.S.C. § 45(a)(1). As the April Release stated, “[t]hat would include the sale or use of – for example – racially biased algorithms.”
- Fair Credit Reporting Act (“FCRA”): The FCRA promotes the accuracy, fairness, and privacy of information in the files of consumer reporting agencies. The April Release noted that “[t]he FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.”
- Equal Credit Opportunity Act (“ECOA”): The ECOA prohibits creditors from discriminating against credit applicants on the basis of race (or other bases), because an applicant receives income from a public assistance program, or because an applicant has in good faith exercised any right under the Consumer Credit Protection Act. The April Release commented that “[t]he ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”
Following up on the April Release, in November FTC Chair Lina M. Khan announced several new additions to the FTC’s Office of Policy Planning. Olivier Sylvain, Meredith Whittaker, Amba Kak, and Sarah Myers West will work with the FTC’s Chief Technology Officer and technologists as part of an informal AI Strategy Group to advise on emerging technology issues. This development points to a continued FTC emphasis on AI technologies heading into 2022.