FTC Launches Operation AI Comply with Five Enforcement Actions Involving AI Misuse – AI: The Washington Report
Thursday, October 3, 2024
  • On September 25, 2024, the Federal Trade Commission (FTC) announced five enforcement actions against companies that allegedly used AI to further deceptive or unfair conduct.
  • The actions are part of the FTC’s Operation AI Comply, the agency’s new law enforcement sweep of the misuse of AI.
  • The new actions highlight the agency’s resolve to use its existing authority to enforce general laws against unfair or deceptive conduct as it pertains to AI. 
     

 
On September 25, 2024, the FTC launched Operation AI Comply, the agency’s new law enforcement sweep that continues its focus on companies that use AI to further deceptive or unfair conduct. As part of the sweep, the FTC announced enforcement actions against five companies, including a company that promised AI-powered legal services, another company that sold AI-generated reviews and testimonials, and three companies that used the hype around AI to lure consumers into bogus schemes.

The FTC’s new law enforcement sweep of AI underscores the agency’s focus on using its existing authority to enforce general laws against unfair or deceptive conduct related to AI and crack down on AI misuse. “Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina Khan. “By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

FTC Cracks Down on AI

In past newsletters, we’ve highlighted the FTC’s resolve to use its existing authority to crack down on the misuse of AI. In October 2023, as we covered, Chair Khan made clear her view that “there's no AI exemption to the laws on the books.” Commissioner Slaughter concurred, arguing, “FTC’s prohibitions against unfair and deceptive practices and unfair methods of competition apply to applications of AI just as much as they have to every other new technology that has been introduced in the market in the last one hundred years.”

In the past year, the FTC has taken a number of enforcement actions to crack down on companies that use AI to make deceptive claims or advance fraudulent schemes. Last August, as we covered, the FTC’s complaint against Automators AI marked the agency’s first individual case concerning AI-related misleading claims. The FTC has also brought complaints against Rite Aid and other companies for their misuse of AI, further demonstrating its willingness to bring enforcement actions against companies that use AI in ways that violate the FTC Act and other statutes.

Operation AI Comply

With the launch of Operation AI Comply, the FTC took enforcement actions against the following five companies, alleging that the companies used AI to promote false claims or advance bogus business schemes:

  1. DoNotPay
    The FTC’s complaint alleges that DoNotPay advertised its online subscription service as an “AI lawyer” that could “generate perfectly valid legal documents in no time” and file lawsuits on behalf of people. However, according to the FTC, DoNotPay did not employ or retain attorneys, and its AI chatbot’s outputs were never tested against those of a human lawyer. The FTC argued that DoNotPay’s false claims and misleading statements constituted unfair or deceptive practices. DoNotPay has agreed to settle the charges by paying $193,000 and ceasing to make claims “about its ability [to use AI] to substitute for any professional service without evidence to back it up.”
  2. Ascend Ecom
    The FTC charged Ascend Ecom with falsely claiming that its “cutting edge” AI-powered tools could help consumers open online storefronts and quickly earn thousands of dollars a month in passive income. However, according to the FTC’s complaint, “virtually none of Ascend’s clients [earned] the advertised income.” The FTC argued that Ascend’s false and unsubstantiated claims constitute deceptive acts in violation of Section 5 of the FTC Act. In response to the FTC’s complaint, a federal court ordered Ascend to temporarily halt its scheme while the case is ongoing.
  3. Ecommerce Empire Builders
    The FTC filed a lawsuit against Ecommerce Empire Builders’ (EEB) business opportunity scheme, which falsely claimed to help consumers launch businesses powered by AI. EEB encouraged consumers to harness the “power of artificial intelligence” to “skip the guesswork and start a million-dollar business today.” However, the FTC’s complaint alleges that EEB did not have evidence to substantiate its claims. In response to the FTC’s complaint, a federal court ordered EEB to temporarily halt its scheme.
  4. FBA Machine
    In June 2024, the FTC filed a complaint against FBA Machine for allegedly running a business opportunity scheme involving AI-powered online stores that falsely promised guaranteed income for consumers. FBA Machine also claimed to offer an e-commerce automation training program that would help consumers earn substantial income. The FTC alleged that FBA Machine’s claims were false and misleading and cost consumers up to $15.9 million. A federal court has ordered FBA Machine to temporarily halt its scheme.
  5. Rytr
    The FTC charged Rytr with promoting and selling an AI-powered writing assistant service that generates false reviews and testimonials. The complaint alleges that these AI-generated fake reviews could deceive consumers who rely on them when making purchasing decisions. The FTC argues that Rytr violated the FTC Act not only by generating false and deceptive content but also by furnishing its users with the means and instrumentalities to generate such content. The proposed order would prohibit the company from offering or promoting any service for generating consumer reviews or testimonials.

    The Rytr case was the only one of the five authorized on a party-line 3-2 vote. The dissenting Commissioners were concerned that the complaint did not allege concrete conduct sufficient to justify issuing it. The case against Rytr, as Commissioner Holyoak pointed out, “does not allege that users [of Rytr’s AI-generated review service] actually posted any draft reviews.” Rather, as Commissioner Ferguson noted, “the Commission reasons that a business could use Rytr’s tool to create false or deceptive consumer reviews that the business could then pass off as authentic reviews in violation of Section 5.” According to Commissioner Ferguson, “Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents” and “risks strangling a potentially revolutionary technology in its cradle.”
