The Federal Trade Commission’s (FTC) privacy and consumer protection enforcement program is in the midst of a transformative period under the leadership of Chair Lina Khan. Three important, and new, areas of focus this year are pixel tracking, dark patterns in consumer interfaces, and artificial intelligence (AI).
Here is a quick look at where the FTC is now and tips for staying compliant with rapidly changing FTC standards.
Pixel Tracking
Earlier this year, the FTC launched an aggressive pixel tracking enforcement program, which we covered here, focusing primarily on the transmission of health-related data to social media and advertising platforms. The FTC’s attention to tracking technologies is not new, but its focus on the intersection of tracking technologies and sensitive data is. The FTC’s blog posts (here and here) describing this initiative suggest that much more of this type of enforcement is coming.
One key question is whether the FTC will extend this enforcement agenda beyond health data to other types of sensitive data – for example, financial information, kids’ information, precise location data, and biometric information – and to the websites and apps that collect those types of data. That seems likely: after all, the FTC has historically treated all types of sensitive data the same way. The next step would be for the FTC to take the same position on any type of personal information, and possibly even browsing history.
What do you need to do now? Identify all trackers on your site and apps and understand what they collect, where that data goes, and for what purposes. These technologies sometimes end up as “set-it-and-forget-it,” meaning that sites and apps frequently collect and disseminate information without active monitoring by their operators. The FTC’s enforcement activity this year makes clear that the FTC expects companies to stay on top of these trackers.
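As a starting point for that inventory, a script along the following lines can list the third-party scripts, pixels, and iframes that a page declares in its HTML. This is a minimal sketch using only the Python standard library; it catches only statically declared tags (many trackers are injected dynamically by tag managers, so a full audit would also use a browser-based scan), and the URL shown is a placeholder for your own pages.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class ThirdPartyTagFinder(HTMLParser):
    """Collects src attributes of script, img, and iframe tags."""

    def __init__(self, first_party_host):
        super().__init__()
        self.first_party_host = first_party_host
        self.third_party_sources = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc
        # Rough heuristic: anything served from a different host is a candidate tracker.
        if host and not host.endswith(self.first_party_host):
            self.third_party_sources.add(src)


def list_third_party_tags(url):
    """Fetch a page and report the third-party resources declared in its HTML."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    finder = ThirdPartyTagFinder(urlparse(url).netloc)
    finder.feed(html)
    return finder.third_party_sources


if __name__ == "__main__":
    # Placeholder URL: point this at your own pages, especially those that
    # collect health, financial, or other sensitive information.
    for src in sorted(list_third_party_tags("https://www.example.com/")):
        print(src)
```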
Once you’ve identified all trackers, the data they collect and transmit, and to whom and for what purpose, you’ll need to match that up against your privacy policy and other public-facing privacy representations. Is there a conflict? The answer may not be a change to the privacy policy: the FTC recently made clear, again, that retroactive privacy policy changes require opt-in consent, and the state comprehensive privacy laws now in effect impose the same standard. The change may instead need to be technical, such as removing or disabling the trackers, at least on pages that collect sensitive information.
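One way to implement that technical change is to gate each tracker behind a simple server-side policy check before its tag is ever rendered, so it never loads on sensitive pages or for users who have not opted in. The sketch below is illustrative only: the tracker names, the page-sensitivity flag, and the consent flag are assumptions standing in for whatever consent-management and page-classification logic your stack actually uses.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrackerPolicy:
    name: str                   # e.g., an analytics or advertising pixel
    requires_consent: bool      # load only after an affirmative opt-in
    allowed_on_sensitive: bool  # permit on pages that collect sensitive data


# Hypothetical inventory built from the audit step above.
TRACKER_POLICIES = [
    TrackerPolicy("analytics_pixel", requires_consent=True, allowed_on_sensitive=False),
    TrackerPolicy("ad_retargeting_pixel", requires_consent=True, allowed_on_sensitive=False),
]


def trackers_to_render(page_is_sensitive, user_has_consented):
    """Return the trackers whose tags may be emitted for this request."""
    allowed = []
    for policy in TRACKER_POLICIES:
        if page_is_sensitive and not policy.allowed_on_sensitive:
            continue  # never emit this tag on, e.g., a symptom-checker or checkout page
        if policy.requires_consent and not user_has_consented:
            continue  # no tag until the user opts in
        allowed.append(policy.name)
    return allowed


if __name__ == "__main__":
    # A page collecting health information, no opt-in recorded: no trackers load.
    print(trackers_to_render(page_is_sensitive=True, user_has_consented=False))
    # A general marketing page with consent: trackers may load.
    print(trackers_to_render(page_is_sensitive=False, user_has_consented=True))
```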
Dark Patterns
About one year ago, the FTC released a staff report entitled “Bringing Dark Patterns to Light.” Again, this report was a way for the FTC to outline its views and to pave the way for enforcement. And enforcement came, as we described here.
What is a “dark pattern”? The FTC says that it is a user interface that manipulates consumers into making choices they otherwise would not have made. The FTC’s position is that these interfaces can cause harm in violation of Section 5 of the FTC Act. The states (among those with new laws taking effect this year: California and Colorado) and even the United Kingdom have also outlawed dark patterns.
Examples of dark patterns are:
- Design elements that induce false beliefs, such as presenting an interface that appears unbiased (a news article or a product comparison) when it is not;
- Design elements that hide or delay the disclosure of material information, such as failing to display, or to display adequately, a full price, including fees;
- Unintended or unauthorized purchases and subscription enrollments, such as in-app purchases by children without accountholder approval or unintended paid subscriptions or memberships;
- Making it difficult to cancel a service or membership;
- Placing material terms in a general terms and conditions document or behind hyperlinks, pop-ups, or dropdown menus;
- Presenting toggle settings that lead consumers to make unintended privacy choices, highlighting a choice that results in more information collection, and using default settings that maximize data collection and sharing; and
- Failing to clearly explain the choices offered to consumers.
This is a subjective standard, and although it often arises in a privacy context, it borrows from advertising law concepts. The FTC is likely to continue working backwards from consumer complaints, treating the practices that generated them as dark patterns, as it did in the Epic Games matter.
For this reason, the best way to avoid FTC dark patterns scrutiny is to carefully review your company’s consumer complaints to see if any patterns emerge alleging deceptive or manipulative practices. Even if you don’t think your user interfaces, including marketing communications, are deceptive or manipulative, consider making changes and continuing to monitor complaints to see if the changes result in fewer (or no) consumer complaints. You get your complaints more quickly, and in much higher volume, than the FTC does. Because complaints are a regulator’s roadmap to enforcement, promptly resolving the issues they raise is a great way to reduce your exposure risk.
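A lightweight way to start that review is to tally complaints containing language commonly associated with dark-pattern allegations and watch how the counts trend after interface changes. The sketch below is illustrative only; the keyword list and sample complaints are placeholders, and a real program would read exports from your support or complaints system.

```python
from collections import Counter

# Phrases that commonly signal dark-pattern-style allegations (illustrative only).
DARK_PATTERN_SIGNALS = {
    "cancellation": ["can't cancel", "cannot cancel", "no way to cancel"],
    "unauthorized charge": ["didn't authorize", "did not authorize", "charged without"],
    "hidden fees": ["hidden fee", "surprise fee", "didn't see the price"],
    "unwanted subscription": ["didn't sign up", "auto-renew", "kept charging"],
}


def tally_signals(complaints):
    """Count how many complaints mention each category of dark-pattern language."""
    counts = Counter()
    for text in complaints:
        lowered = text.lower()
        for category, phrases in DARK_PATTERN_SIGNALS.items():
            if any(phrase in lowered for phrase in phrases):
                counts[category] += 1
    return counts


if __name__ == "__main__":
    # Placeholder complaints; in practice, load these from your support system.
    sample = [
        "I was charged without my permission after the free trial.",
        "There is no way to cancel my membership on the website.",
        "The checkout added a hidden fee I didn't see the price for.",
    ]
    for category, count in tally_signals(sample).most_common():
        print(f"{category}: {count}")
```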
Artificial Intelligence
While generative artificial intelligence exploded into the public consciousness late last year, the FTC’s interest in artificial intelligence, machine learning, and automated decision making goes back over a decade, covering facial recognition, big data, Fair Credit Reporting Act and Equal Credit Opportunity Act compliance, advertising law compliance (here and here), and concerns that AI outputs may be inaccurate or biased and may lead to discrimination. The FTC’s investigation of OpenAI, which was leaked in July (we covered it here), appears to focus on some of these issues, as well as on data minimization; data scraping; data security; safety; consumer controls; and outputs yielding false, misleading, or disparaging statements about individuals.
FTC enforcement in the artificial intelligence space is just beginning, and it will be some time before the FTC’s enforcement priorities emerge through settlements or enforcement actions. It’s possible that the FTC ultimately joins a growing chorus of regulators around the world that take issue with widespread data scraping, or that the FTC takes the position that falling below specific safety standards is an unfair practice, in violation of Section 5.
For now, however, the most likely enforcement priorities to emerge early on appear to be advertising claims and public-facing privacy statements about AI products, user controls that work as described, and the potential for AI products to reach biased or discriminatory outcomes. These issues are not new to the FTC. Review your AI advertising claims to be sure that they are true and not misleading. Check your privacy representations against your actual data practices and confirm that they are consistent. And test your AI systems to ensure that outputs are not biased or discriminatory. Shoring up these areas should meaningfully reduce your risk of FTC enforcement in the AI context, at least for now.
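For the bias-testing step, one common approach is a paired test: run the same inputs through the system while varying only a protected attribute, then compare favorable-outcome rates across groups. The sketch below assumes a hypothetical `decide(applicant)` function standing in for your AI system and a chosen disparity threshold; it is a rough screening signal under those assumptions, not a complete fairness evaluation.

```python
def favorable_rate(decide, applicants):
    """Share of applicants in a group that receive a favorable outcome."""
    if not applicants:
        return 0.0
    return sum(1 for a in applicants if decide(a)) / len(applicants)


def disparity_check(decide, group_a, group_b, threshold=0.2):
    """Flag a potential disparity if favorable-outcome rates differ by more than threshold."""
    rate_a = favorable_rate(decide, group_a)
    rate_b = favorable_rate(decide, group_b)
    return abs(rate_a - rate_b) > threshold, rate_a, rate_b


if __name__ == "__main__":
    # Hypothetical stand-in for the AI system under test.
    def decide(applicant):
        return applicant["credit_score"] >= 650

    # Paired profiles identical except for the protected attribute being tested.
    group_a = [{"credit_score": s, "group": "A"} for s in (600, 640, 700, 720)]
    group_b = [{"credit_score": s, "group": "B"} for s in (600, 640, 700, 720)]

    flagged, rate_a, rate_b = disparity_check(decide, group_a, group_b)
    print(f"group A rate={rate_a:.2f}, group B rate={rate_b:.2f}, flagged={flagged}")
```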
Conclusion
Much of the compliance activity in the United States since 2018 has focused on the comprehensive privacy law (and regulatory) regime emerging at the state level. That trend shows no real signs of abating, with five state laws taking effect this year and many more taking effect in coming years. However, the FTC’s consumer protection and privacy enforcement program has also been very active, and the FTC will almost certainly continue to push the envelope for the rest of this year and beyond as it works to remain relevant in an increasingly crowded US regulatory environment. So, while it’s crucial to ensure state law compliance, that’s not enough; it’s important to follow FTC policy and enforcement work to round out your compliance program and protect your company from the expense and brand damage that can result from FTC enforcement.