FTC Signals Intention to Begin Rulemaking on Privacy and AI, Hints at Areas of AI Focus in Congressional Report
Wednesday, June 22, 2022

The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiating a rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” This follows a similar indication from Fall 2021, when the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will issue an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the associated public comment period to end in August, whereas it did not specify a timeline for the ANPRM when it made its initial indication back in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.

Also, on June 16, 2022, the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online problems such as scams, deepfakes, and fake reviews, as well as more serious harms, such as child sexual exploitation and incitement of violence. While the Report is specific in its purview (addressing the use of AI to combat online harms, as we discuss further below), the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.

Background on Congress’s Request & the FTC’s Report

The Report was issued by the FTC at the request of Congress, which, through the 2021 Appropriations Act, had directed the FTC to study and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” While the Report spends considerable time addressing the prescribed online harms and offering recommendations regarding the use of AI to combat them, along with caveats against over-reliance on AI tools, it also devotes significant attention to signaling the FTC’s thinking on AI more broadly. In particular, reflecting concerns raised by the FTC as well as other policymakers, thought leaders, and consumer advocates, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in the forthcoming rulemaking.

While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. It is therefore reasonable to expect that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under its jurisdiction. More broadly, companies employing AI technologies should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of these issues.

The FTC’s Recommendations Regarding the Use of AI

Another major takeaway from the Report is the series of “related considerations” that the FTC cautions will require great care and focused attention when operating AI tools. Those considerations include (among others) the following:

  • Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.

  • Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.

  • Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable for both their data practices and their results.

  • Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools, as well as the firms procuring and deploying them, must be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and the work undertaken to create these tools is important in this regard. Consideration should also be given to potential impacts and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority, such as in the treatment of training data.

Of note, the Report identifies transparency and accountability as the most valuable direction in this area, at least as an initial step. Allowing researchers and others to see behind platforms’ opaque screens (in a manner that takes user privacy into account) may prove vital for determining the best courses for further public and private action, particularly given the difficulty of crafting appropriate solutions when key aspects of the problems are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”

Congress also instructed the FTC to recommend laws that could advance the use of AI to address online harms. The Report, however, finds that, given that major tech platforms and others are already using AI tools to address online harms, lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.

Taken together, these considerations suggest that companies should expect the FTC to pay particularly close attention to these issues as it begins to take a more active approach to policing the use of AI.

FTC: Our Work on AI “Will Likely Deepen”

In addition to signaling its possible areas of focus moving forward, the FTC veered outside the purview of Congress’s mandate to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and comment on its intentions as to AI going forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” The FTC also specifically calls out its recent AI-related staffing enhancements, highlighting the hiring of technologists and additional staff with expertise in, and specifically devoted to, the subject matter.

The Report also highlights the FTC’s major AI-related initiatives to date.

Conclusion

The recent Report to Congress strongly indicates the FTC’s overall apprehension and distrust as it relates to the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, the FTC itself is also likely to step up its investigation and enforcement efforts against improper AI practices more generally, especially with respect to the Agency’s concerns regarding inaccuracy, bias, and discrimination.

Given these developments, companies should consult with experienced AI counsel for advice on proactive measures they can implement now to get ahead of the compliance curve and put themselves in the best position to mitigate legal risks, as it is likely only a matter of time before regulation governing the use of AI is enacted.
