In July, we noted that the Federal Trade Commission was an early mover in shaping standards for generative AI, having just served OpenAI with a Civil Investigative Demand seeking detailed responses to almost two hundred data requests,[1] consistent with FTC Chair Lina Khan’s earlier pledge to “update our approach to keep pace with new learning technologies and technological shifts.”[2]
This week, the FTC announced a proposed order that would settle its investigation into the Rite Aid pharmacy chain’s use of AI-powered facial recognition technology. The order will replace and supersede a 2010 order imposed on the company for failing to adequately protect the sensitive protected health information (PHI) of its customers by, among other things, improperly disposing of records containing PHI in ordinary trash.
Many readers are likely aware of the significant rise in retail shoplifting, including by “flash mobs” of semi-organized thieves. The FTC’s new Complaint centers on Rite Aid’s use of AI facial recognition technology to combat shoplifting from 2012 through 2022. The system apparently attempted to identify customers whom Rite Aid deemed likely to engage in shoplifting and other criminal behavior, logging them as “enrollees” in order to “drive and keep persons of interest out” of its stores. According to the Complaint, the system then alerted employees when individuals entering Rite Aid stores appeared to match those on the watchlist.
Predictably, Rite Aid’s system led employees to confront the “matching” individuals, sometimes subjecting them to verbal accusations of criminal behavior, detaining them, searching them, and even reporting “criminal activity” to the police. Thousands of false positive matches apparently resulted. False positives and false negatives, often referred to as Type I and Type II errors, respectively, are inherent in any classification system, including human decision-making. Here, however, Rite Aid failed to take what the FTC considered reasonable steps to prevent false positives from harming its customers: it used low-quality images from CCTV cameras as input data, failed to train employees properly, and failed to test or monitor the system for accuracy or even to measure its rate of false positives. Some of the examples would be comedic if not for the real embarrassment suffered by innocent Rite Aid customers. For example, the system generated thousands of hits for the same “enrollee” within very short periods even though the stores involved were thousands of miles apart.
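For technically inclined readers, the accuracy measurement the Complaint says Rite Aid never performed is not difficult once match outcomes are logged and verified. The Python sketch below (entirely hypothetical and illustrative; nothing in the order prescribes any particular method) shows how a retailer might compute false-positive and false-negative rates from a sample of verified alerts:

```python
# Minimal sketch: computing Type I / Type II error rates for a
# face-matching system from manually verified outcomes.
# All names and figures are hypothetical, for illustration only.

def error_rates(outcomes):
    """outcomes: list of (system_said_match: bool, actually_matched: bool)."""
    fp = sum(1 for predicted, actual in outcomes if predicted and not actual)
    fn = sum(1 for predicted, actual in outcomes if not predicted and actual)
    negatives = sum(1 for _, actual in outcomes if not actual)  # true non-matches
    positives = sum(1 for _, actual in outcomes if actual)      # true matches
    fpr = fp / negatives if negatives else 0.0  # Type I error rate
    fnr = fn / positives if positives else 0.0  # Type II error rate
    return fpr, fnr

# Hypothetical audit sample: four store visits, one genuine match.
sample = [(True, False), (True, True), (True, False), (False, False)]
fpr, fnr = error_rates(sample)
print(f"false-positive rate: {fpr:.0%}, false-negative rate: {fnr:.0%}")
```

Even a rough audit of this kind, performed periodically on a random sample of alerts, would have surfaced the error rates at issue in the Complaint.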
As part of the order, Rite Aid is banned for five years from using or deploying any facial recognition system on its customers. In addition, the company must destroy all images and analyses generated by the offending system and identify all third parties that received photos and videos from it. After the five-year ban, Rite Aid must conduct a written assessment of the potential risks to consumers from the use of any automated biometric security or surveillance system. This System Assessment will involve, among other things, designating qualified employees to coordinate and be responsible for the system, and creating written documentation covering system assessments; testing for accuracy and likelihood of error; the algorithms used for machine learning, along with the data sets used for training; the demographic and geographic contexts in which such systems are deployed; and the training and oversight of the employees who act on the systems’ outputs. A sketch of what such documentation might look like in practice follows.
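To make the documentation burden concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and values; the order prescribes the substance of the records, not any particular schema) of the kind of structured record an organization might maintain for each deployed biometric system:

```python
# Hypothetical documentation record supporting a System Assessment.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class SystemAssessmentRecord:
    system_name: str
    responsible_employees: list[str]          # designated, qualified coordinators
    algorithm_description: str                # machine learning algorithms in use
    training_datasets: list[str]              # provenance of training data
    measured_false_positive_rate: float       # from periodic accuracy testing
    measured_false_negative_rate: float
    deployment_contexts: list[str] = field(default_factory=list)  # demographic/geographic
    employee_training_completed: bool = False # training on responding to outputs

record = SystemAssessmentRecord(
    system_name="store-entry-matching",
    responsible_employees=["compliance-lead"],
    algorithm_description="third-party face-embedding matcher",
    training_datasets=["vendor-supplied gallery"],
    measured_false_positive_rate=0.02,
    measured_false_negative_rate=0.10,
    deployment_contexts=["urban stores, mixed demographics"],
)
```

Whatever form the records take, the point is that each element the order names, from training data provenance to employee oversight, should be written down, kept current, and tied to a responsible person.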
The System Assessment requirement is akin to the FTC’s approach in cases involving cybersecurity incidents and data privacy violations. For example, in the span of two days in July 2019, the FTC announced a $700 million settlement with Equifax over a data breach and a $5 billion settlement with Facebook for violating consumers’ privacy. The accompanying orders mandated implementation of an Information Security Program and a Data Privacy Program, respectively, with each program containing multiple technical requirements that are now standard in FTC cybersecurity and data privacy settlements.[3]
As with the Information Security and Data Privacy Programs, the System Assessment requirement is detailed and comprehensive: it includes roughly 40 technical requirements. Any entity that deploys facial recognition or other biometric technologies on its customers should read the order in detail and ask whether its own systems could avoid the FTC’s ire and satisfy the System Assessment requirement. The FTC is consistent in another important way: each of the above orders concludes with the requirement that parties “[e]valuate and adjust the Program [or Assessment] in light of any circumstances that [they] know may materially affect the Program’s effectiveness.”[4] With a nod to the “technological shifts” Chair Khan referred to, organizations aiming to benefit from AI systems for biometric security and surveillance should apply the System Assessment or a variation thereof.
[1] AI Cybersecurity, Data Privacy Standards Coming Soon from White House (natlawreview.com)
[3] See, e.g., https://www.ftc.gov/system/files/documents/cases/172_3203_equifax_proposed_order_7-22-19.pdf at 12-19, and https://www.ftc.gov/system/files/documents/cases/182_3109_facebook_order_filed_7-24-19.pdf at 16-20.
[4] https://www.ftc.gov/system/files/ftc_gov/pdf/2023190_riteaid_stipulated_order_filed.pdf at 12.