Continue to Watch Out for the FTC — AI: The Washington Report
Friday, September 29, 2023

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities.

We have covered — and will continue to cover — Congress’ attempts to get a handle on AI policy issues and work its way toward legislation. Even the most ardent proponents of congressional action acknowledge that legislation is not imminent and may be months or years away. AI technology will not wait for Congress or the regulators.

At least in the interim, it seems likely that the Biden administration and agencies will try to utilize the tools they already have. Many believe that one agency likely to continue to lean forward — and act — is the Federal Trade Commission (“FTC” or “Commission”), under the aggressive leadership of Chair Lina Khan. Activity in Washington last week only reinforced that hypothesis. Our key takeaways are:

  1. A recent speech by the Director of the FTC’s Bureau of Consumer Protection indicates that the Commission does not plan to rely on corporate self-regulation to address harms caused by AI. Rather, the current FTC will seek to use its merger review, consumer protection, and rulemaking authority to regulate AI use cases under its jurisdiction.
  2. A hearing for three nominees to the FTC (current Commissioner Rebecca Kelly Slaughter (D), Melissa Holyoak (R), and Andrew Ferguson (R)) saw consensus among the nominees about the need for the Commission to utilize its existing statutory authority over AI.
  3. A recently introduced bill, filed by Senator Ron Wyden (D-OR) and 11 co-sponsors, would mandate that certain algorithms involved in mediating access to certain essential services be subject to periodic impact assessments. To enforce this requirement, the bill would grant the FTC substantial new authority, create a new Bureau of Technology within the FTC, and add 75 employees to the Commission. 
     

A Big Week for FTC and AI

During the week of September 18, three events occurred that underscored the FTC’s intention to utilize its existing tools in the domain of AI.

First, on September 19, Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, delivered a speech outlining the Commission’s approach to AI regulation. Then, on September 20, the Senate Commerce Committee held a hearing for nominees to the Federal Trade Commission in which all three nominees affirmed the FTC’s commitment to utilize its existing statutory authority to regulate certain AI use cases. Finally, on September 21, Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act of 2023, a bill that would delegate to the FTC significant enforcement authority over entities deploying certain algorithms.

A Pivot Away from Self-Regulation: Samuel Levine’s Speech on the FTC’s AI Strategy

On September 19, 2023, Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, delivered a speech to the National Advertising Division Annual Conference. In the speech, Levine argued that, absent a clear regulatory framework bolstered by generally agreed-upon principles and a strong regulator, the federal government cannot rely on up-and-coming AI companies to self-regulate.

Levine asserted that the policy decisions that led Congress “to not pass comprehensive privacy legislation” and instead rely on self-regulation “were grave mistakes.” To avoid replicating the course taken by the Commission during the era of Web 2.0, Levine explained that the FTC is utilizing a novel three-pronged strategy toward AI.

  1. Blocking mergers perceived to threaten technological progress. Levine asserted that the FTC would continue to take action to ensure that markets remain “open, fair, and competitive” so that the United States can “remain the leader in developing cutting-edge technologies…” On this score, Levine referenced the Commission’s challenge to Nvidia Corp.’s proposed acquisition of Arm Ltd., which the parties abandoned in February 2022.
  2. Utilizing existing enforcement tools to challenge unfair or deceptive uses of AI. Levine reminded firms that statutes such as the FTC Act grant the Commission the authority to bring cases against firms that use AI in a manner deemed unfair or deceptive. The Commission demonstrated its willingness to leverage this authority in a recent case against e-commerce firm Automators AI.
  3. Expanding the FTC’s toolkit on AI issues through rulemaking. Through a series of proposed rules, such as a rule that would allow the Commission to seek civil penalties and monetary relief on behalf of individuals defrauded by voice-cloning tools, the FTC hopes to establish the authority to respond to certain novel AI harms.

Finally, Levine asserted that the FTC will continue to recruit technologists into the recently created Office of Technology to bolster the Commission’s ability to respond to developments in the field of AI.

Hearing Sees Agreement Among FTC Nominees About Commission’s AI Enforcement Strategy

On September 20, 2023, the Senate Commerce Committee held a hearing, inter alia, for nominees to the Federal Trade Commission and the Consumer Product Safety Commission. The nominees to the FTC included:

  • Rebecca Kelly Slaughter (D), for a second term as a Commissioner of the Federal Trade Commission.
  • Melissa Holyoak (R), for a first term as a Commissioner of the Federal Trade Commission.
  • Andrew Ferguson (R), for a first term as a Commissioner of the Federal Trade Commission.

During her five-year tenure at the FTC, Commissioner Slaughter has been quite vocal on the issue of AI regulation. The September 20 hearing saw the two Republican nominees broadly agree with Commissioner Slaughter that the FTC must utilize its existing authority to regulate certain AI use cases, although they did not commit to the specific enforcement paradigms detailed below.

Commissioner Slaughter’s Previous Statements on AI Regulation

As early as January 2020, Commissioner Slaughter commented on the potential for algorithms to perpetuate unlawful bias and detailed a range of possible means by which the Commission could address these harms, including:

  • Utilizing the Commission’s “unfairness authority” under Section 5 of the FTC Act “to target algorithmic injustice.”
  • Leveraging an exception within Regulation B of the Equal Credit Opportunity Act to “encourage non-mortgage creditors to collect demographic data on most borrowers and use it to reduce disparities and train AI and other algorithmic systems to reduce disparities.”
  • Exploring how the transparency requirements put in place by the Fair Credit Reporting Act “might lead to increased algorithmic transparency in the credit sphere.”

In August 2021, Commissioner Slaughter co-authored a paper that systematized a novel FTC enforcement doctrine called “algorithmic disgorgement,” or the enforced deletion of algorithms developed with data deemed to have been collected illegally. As we discussed in a previous newsletter, the FTC has since applied this doctrine in multiple settlements.

AI Issues Discussed During the September 20 Confirmation Hearing

With regard to AI, the September 20 hearing largely focused on how the nominees perceive the FTC’s ability to regulate AI given its statutory authority. Senator John Thune (R-SD) asked whether “the FTC has the authority to regulate AI, or should it be left to Congress?”

Commissioner Slaughter responded that while “existing law does apply to new technologies” and the “FTC Act has adapted to new technologies” in the past, “it may be that new authority is also needed” to regulate AI. “There may be things that Congress deems are problematic that go beyond what the FTC Act covers.” Ferguson agreed with Slaughter, explaining that the FTC “enforces rules and laws across all industries,” but that “a grand regulatory framework for AI” falls “squarely within the purview of Congress.” Holyoak concurred, noting that the prevalence of AI-driven “frauds and scams” means that the Commission must “apply the laws that we have” and work with Congress to fill “gaps that we think that we need to address.”

In sum, there seemed to be unanimity among the nominees regarding the FTC’s statutory authority to regulate certain use cases of AI. However, a full five-member Commission may still hold divergent views on how far that authority extends and how it should be exercised with respect to AI. For instance, it is not yet clear whether Ferguson and Holyoak would support the Commission’s use of algorithmic disgorgement in settlements.

The Algorithmic Accountability Act: Granting the FTC Further Jurisdiction Over AI

On September 21, 2023, Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act of 2023. The bill would require certain entities deploying algorithms that have a “legal, material, or similarly significant effect” on access to, or the cost of, certain essential services to periodically release impact assessments of those algorithms. The bill would rely on the FTC for many aspects of its implementation and execution, including the design of reporting guidelines, the processing and publication of data from transparency reports, and enforcement.

First, the bill would direct the FTC to work with relevant stakeholders to create guidelines on transparency and impact assessment reporting. Entities subject to these reporting requirements would need to perform “ongoing testing and evaluation of the current and historical performance” of the relevant algorithms and comply with other transparency requirements.

The Commission would be tasked with releasing an annual report summarizing the information from the submitted transparency reports and describing “broad trends…about performing impact assessments of automated decision systems…” This report would also make “recommendations to other regulatory agencies” about AI regulation.

To allow the FTC to exercise these responsibilities, the bill would establish a Bureau of Technology tasked with “advising the Commission with respect to the technological aspects of the functions of the Commission…” The bill would add 75 employees to the FTC, 50 to the newly established Bureau of Technology, and 25 to the Division of Enforcement of the Bureau of Consumer Protection. Enforcement of this bill would be delegated to the FTC and state attorneys general.

Raj Gambhir, Project Analyst in the firm’s Washington, DC office, also contributed to this article.
