On September 25, 2024, the Federal Trade Commission announced “Operation AI Comply.” According to FTC attorneys, “some marketers can’t resist taking advantage of that by using the language of AI and technology to try to make it seem like their products or services deliver all the answers.”
As part of Operation AI Comply, the FTC announced five cases exposing alleged AI-related deception.
First, the FTC announced four settlements involving allegedly deceptive claims about AI-driven services, three of which are purported business opportunity scams that claim to use AI to help people earn more money, faster. The agency also announced a settlement involving a company that purportedly offered a generative AI tool that let people create what the FTC alleges to be fake consumer reviews.
- DoNotPay: An FTC complaint claims U.K.-based DoNotPay told people its online subscription service acts as “the world’s first robot lawyer” and an “AI lawyer” by using a chatbot to prepare “ironclad” documents for the U.S. legal system. The complaint says DoNotPay told small businesses its service could check their websites for law violations and help them avoid significant legal fees. According to the complaint, DoNotPay’s service did not live up to the hype. The FTC welcomes comments on a proposed settlement between the FTC and DoNotPay, which requires DoNotPay to stop allegedly misleading people, pay $193,000, and tell certain subscribers about the case.
- Ascend Ecom: An FTC complaint filed in California alleges a group of companies and their officers used deceptive earnings claims to convince people to invest in “risk free” business opportunities supposedly powered by AI. The FTC also says the defendants refused to honor their “risk free” money-back guarantees, and threatened and intimidated people to keep them from publishing truthful reviews. According to the complaint, the defendants’ conduct violated the FTC Act, the Business Opportunity Rule, and the Consumer Review Fairness Act.
- Ecommerce Empire Builders: In a complaint filed in Pennsylvania, the FTC claims a company and its officer violated the FTC Act and the Business Opportunity Rule with their AI-infused earnings claims. According to the complaint, in addition to failing to provide required statements and disclosures, the defendants allegedly promised people they would quickly earn thousands of dollars a month in additional income by following proven strategies and investing in online stores “powered by artificial intelligence.” The complaint also alleges the defendants made clients sign contracts keeping them from writing and posting negative reviews, in violation of the Consumer Review Fairness Act.
- FBA Machine: In June 2024, the FTC filed a complaint against a group of New Jersey-based businesses and their owner, claiming they used deceptive earnings claims to convince people to invest in a “surefire” business opportunity supposedly powered by AI. According to the complaint, the defendants promised people they could earn thousands of dollars in passive income. The FTC says the defendants threatened people who tried to share honest reviews, and told people they could not get refunds unless they withdrew their complaints. According to the FTC, through these tactics, which purportedly violate the FTC Act, the Business Opportunity Rule, and the Consumer Review Fairness Act, the defendants defrauded their customers of more than $15.9 million. The case is ongoing.
- Rytr: According to an FTC complaint, a Delaware-based company sold an AI-enabled writing assistant with a tool specifically designed for its customers to generate online reviews and testimonials. The complaint says Rytr customers could, “with little input, generate an unlimited number of reviews with specific details that would almost certainly not be true for those users.” According to the complaint, some Rytr customers used this tool to quickly generate thousands of alleged false reviews that would have tricked people reading those reviews online. This, the FTC says, likely harmed many people and was unfair. Rytr has agreed to a proposed settlement prohibiting the company – or anyone working with it – from advertising or selling any service promoted for generating reviews.
Takeaway: The FTC aggressively polices AI-based business tools. Products that claim to fully replace a qualified human professional will be scrutinized. When it comes to legal or financial advice, small mistakes lead to big problems, and not every firm can back up its claims with tools actually equipped to handle complicated, fact-intensive cases. AI tools can be a good starting point, but the FTC is skeptical of claims that they can fully replace a professional.

Shortcuts on reviews are also a bad idea. Using AI tools to help a business fake its way to five stars is a recipe for disaster. Posting fake reviews can violate the FTC Act and the FTC’s Rule on the Use of Consumer Reviews and Testimonials. The FTC has published a guide with tips on soliciting and paying for online reviews.

Additionally, if you are tempted to mention AI in ads to boost sales, do not say you use AI tools if you do not. If the FTC investigates you, the agency’s technologists and others can examine your product or service and determine what is really going on. Merely using an AI tool while developing your product is not the same as offering your customers a product with AI inside. The FTC also expects marketers to possess a “reasonable basis” for any claim made about a product or service, and if special rules apply to the product or service offered, as they do for business opportunities, those must be followed as well. Using technological jargon or saying a product or program relies on AI does not change the analysis.

When it comes to business opportunities, if the FTC investigates you, it will check whether earnings and other claims can be backed up and whether appropriate disclosures are being supplied. Do not claim a program uses new technologies to help consumers make more money unless it is true, and always possess concrete data demonstrating that what is being promised is typical for your customers.
Lastly, the foregoing cases are just the latest in the FTC’s ongoing work to combat AI-related issues in the marketplace. The agency is checking whether products or services actually use AI as advertised and, if so, whether they work as marketers say they will. The FTC is examining whether AI and other automated tools are being used for fraud, deception, unfair manipulation, or other harmful purposes. On the back end, the FTC is looking at whether automated tools have biased or discriminatory impacts.