On February 27, 2023, the Federal Trade Commission (FTC) published guidance addressing artificial intelligence (AI) advertising claims. After the massively successful beta release of OpenAI’s ChatGPT, many companies, big and small, are promoting AI as part of their products, and many consumers are seeking out AI products. However, many companies and consumers do not have a firm understanding of AI or its capabilities. The FTC instructs those marketing and selling products in this space to beware: “for FTC enforcement purposes […] false or unsubstantiated claims about a product’s efficacy are our bread and butter.” It is important for companies that market technology, particularly AI, or that use the technology themselves, to understand the FTC’s evolving enforcement of AI claims in order to avoid future penalties.
Section 5 of the FTC Act prohibits unfair or deceptive acts or practices. In advertising, this means that all claims, whether explicit or implied, must be truthful and substantiated. Claims are evaluated from the perspective of the reasonable consumer, and every reasonable interpretation of a claim must be truthful, accurate, and substantiated. What constitutes adequate substantiation depends on the specific claim itself.
Specifically with respect to AI, advertisers are warned not to exaggerate the capabilities of their AI products or promise that an AI-enabled version will perform better than a non-AI product unless there is adequate substantiation. The FTC also notes that some AI purveyors may not themselves know the capabilities or limitations of their products. Nonetheless, the FTC says AI vendors “need to know about the reasonably foreseeable risks and impact of [the] AI product before putting it on the market.” Further, if a product does not actually use AI, “FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims.”
This is not the first guidance on AI that the FTC has promulgated. Earlier FTC AI guidance from April 2021 focused on fairness and equity, and a lengthy June 2022 report, prepared at the direction of the US Congress, studied “how artificial intelligence (AI) may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified online harms.” [internal quotation marks omitted]. Additionally, just last month, the FTC announced the launch of a new Office of Technology, further signaling the agency’s focus on increased enforcement in the technology sector. The new office is designed to “strengthen the FTC’s ability to keep pace with the technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work.”
With this latest guidance, it is clear that the FTC will scrutinize advertising and offerings in the AI space for truthfulness and will require AI vendors to understand the capabilities and limitations of their products.
Peter Scheyer also contributed to this article.