Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies. The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have sharply increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of potential legislative, executive, and regulatory activity in Washington.
In this issue, we discuss the Civil Investigative Demand (“CID”) that the Federal Trade Commission (“FTC” or “Commission”) served on ChatGPT developer OpenAI in July 2023. Our key takeaways are:
- The OpenAI CID concerns whether OpenAI “engaged in unfair or deceptive privacy or data security practices” or “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm…”
- Using the information gathered through the CID, the Commission may allege that OpenAI’s conduct violated Section 5 of the FTC Act, possibly resulting in fines or a consent decree.
- For the FTC, this investigation is a significant and concrete step toward realizing the Commission’s stated aim of applying its consumer protection powers to AI.
FTC Civil Investigative Demand Served on OpenAI Signals Intent to Apply Current Law to AI
On July 13, 2023, The Washington Post released a partially redacted Civil Investigative Demand (“CID” or “OpenAI CID”) served by the Federal Trade Commission (“FTC”) on OpenAI, the developer of ChatGPT. The OpenAI CID contains 49 inquiries related to the company’s business practices and requests 17 classes of documents. The stated purpose of the FTC’s investigation is to ascertain whether OpenAI “engaged in unfair or deceptive privacy or data security practices” or “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm…”
As we have discussed in previous newsletters, in the absence of clear regulatory guidelines from Congress, regulatory agencies have sought to utilize their existing authority to regulate the growing tide of AI products and services. Of these regulatory agencies, it is the FTC that has been the most active in this field, releasing non-binding business guidance, co-signing the joint statement on bias in automated systems,[1] and pioneering a new form of enforcement known as algorithmic disgorgement.
As discussed in our newsletter on the FTC’s forays into AI enforcement, the current Commission asserts that its statutory authority under laws including Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act grants it the ability to regulate the conduct of AI developers whose practices violate those laws. In the case of the OpenAI CID, the FTC has requested information and documents from the company to ascertain whether there is a case to be made that OpenAI’s conduct violated Section 5 of the FTC Act.[2]
The Subject of the OpenAI CID
As stated above, the OpenAI CID concerns whether the company has committed either of the following alleged violations of the FTC Act:
- Mishandling of user data: Whether OpenAI has engaged in unfair or deceptive data privacy or data security practices.
- Causing reputational harm to consumers: Whether OpenAI has engaged in unfair or deceptive practices that risk harm to consumers. The Commission’s focus here is the extent to which OpenAI’s products cause “reputational harm” to consumers by making “false, misleading, disparaging, or harmful statements” about individuals.
Past FTC investigations have dealt with both of these classes of potential Section 5 violations. The Commission has entered into a number of consent decrees with large technology companies regarding their data privacy and security practices, most notably with Facebook in November 2011. The FTC has also brought cases that included claims of reputational harm to consumers, including a 2002 online privacy case in which pharmaceutical company Eli Lilly allegedly disclosed “an email list of more than six hundred consumers using Prozac” without those customers’ consent.
What is notable about the OpenAI CID, then, is not the potential FTC Act violations the Commission is investigating, but the CID’s role in the FTC’s ongoing effort to establish itself as a leading regulator of AI. While FTC business guidance and statements by Commission leadership have signaled an intention to regulate this space, service of the OpenAI CID represents a significant and concrete step toward realizing that goal. Regardless of whether the FTC is able to build a case that OpenAI committed Section 5 violations, and regardless of whether any such case has merit, the outcome of this investigation could have significant implications for the scope of the FTC’s existing authority to regulate AI developers.
Content of the OpenAI CID
To ascertain whether OpenAI “engaged in unfair or deceptive privacy or data security practices,” the FTC requested that the company provide the following information regarding its business practices:
- The data used to train OpenAI’s large language models (“OpenAI models”), the sources of that data, the categories of content it includes, and all policies and procedures related to vetting and selecting that data.
- Internal policies regarding the oversight of human reviewers of the OpenAI models.
- Procedures followed to assess the safety of products made with OpenAI models prior to their launch.
- Processes surrounding the retraining and refining of OpenAI models.
- Processes for preventing personal information from being included in the training data of OpenAI models.
- Steps taken to mitigate the exposure of users’ personal information via API integrations and plug-ins.
- OpenAI’s handling of a March 2023 security incident in which users’ sensitive personal data were temporarily exposed.
To test whether OpenAI “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm,” the FTC requested the following information:
- Policies and procedures related to the OpenAI models’ generation of statements about individuals, including processes for handling reports that the models “have made…false, misleading, disparaging, or harmful statements” about individuals.
- Steps taken to test and respond to the capacity of products made with OpenAI models to generate “statements about real individuals that are false, misleading, or disparaging.”
- Processes for determining whether an individual is a “public figure” and whether that determination affects the models’ treatment of that individual.
Conclusion: A Model for Future FTC Enforcement Actions on AI?
By serving a CID on leading AI developer OpenAI, the FTC has staked out the view that it can, under current law, police certain aspects of AI’s evolution. The release of ChatGPT in November 2022 set off the current worldwide wave of interest in AI regulation, and OpenAI CEO Sam Altman has been a conspicuous presence on Capitol Hill, calling for AI regulation in multiple congressional hearings. FTC investigations are normally confidential, so details surrounding the course of this CID and the FTC’s evaluation of these issues vis-à-vis OpenAI will be difficult to obtain in real time. We will continue to monitor FTC initiatives in this area.
Raj Gambhir contributed to this article.
FOOTNOTES
[1] The April 2023 “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” signed by key officials from the FTC, Consumer Financial Protection Bureau, Department of Justice, and Equal Employment Opportunity Commission, asserts that the existing “legal authorities” of the undersigned agencies “apply to the use of automated systems.” For a more detailed analysis, please reference our previous newsletter on the FTC.
[2] Section 5 of the FTC Act prohibits “unfair or deceptive acts or practices in or affecting commerce…”