With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part two of our series, in which we turn our focus to regulators, where we are seeing heightened scrutiny at the state level amid uncertainty at the federal level.
FTC Led the Charge but Unlikely to Continue AI “Enforcement Sweep”
As mentioned in part one of our series, last year regulators at the Federal Trade Commission (FTC) launched “Operation AI Comply,” which it described as a “new law enforcement sweep” targeting the use of AI technologies in misleading or deceptive ways.
In September 2024, the FTC announced five cases against AI technology providers for allegedly deceptive claims or unfair trade practices. While some of these cases involve traditional get-rich-quick schemes with an AI slant, others highlight the risks inherent in the rapid adoption of new AI technologies. Specifically, the complaints filed by the FTC involve:
- An “AI lawyer” that could supposedly draft legal documents in the U.S. and automatically analyze a customer’s website for potential violations.
- Marketing of a “risk free” AI-powered business whose operators refused to honor money-back guarantees when the business began to fail.
- A get-rich-quick scheme that attracted investors by claiming they could easily invest in online businesses “powered by artificial intelligence.”
- Positioning a business opportunity supposedly powered by AI as a “surefire” investment and threatening people who attempted to share honest reviews.
- An “AI writing assistant” that enabled users to quickly generate thousands of fake online reviews of their businesses.
Since these announcements, dramatic changes have occurred at the FTC (and across the federal government) as a result of the new administration. Last month, the Trump administration appointed FTC Commissioner Andrew N. Ferguson as the new FTC chair, and Mark Meador’s nomination to fill the FTC Commissioner seat left vacant by former chair Lina M. Khan appears on track for confirmation. These leadership and composition changes will likely impact whether and how the FTC pursues cases against AI technology providers.
For example, Commissioner Ferguson strongly dissented from the FTC’s complaint and consent agreement with the company that created the “AI writing assistant,” arguing that the FTC’s pursuit of the company exceeded its authority.
And in a separate opinion supporting the FTC’s action against the “AI lawyer” mentioned above, Commissioner Ferguson emphasized that the FTC does not have authority to regulate AI on a standalone basis, but only where AI technologies interact with its authority to prohibit unfair methods of competition and unfair or deceptive acts and practices.
While it is impossible to predict precisely how the FTC under the Trump administration will approach AI, Commissioner Ferguson’s prior writings offer insight into the agency’s likely regulatory focus, as does Chapter 30 of Project 2025 (drafted by Adam Candeub, who served in the first Trump administration), which emphasizes protecting children online.
The impact of the new administration’s different approach to AI regulation is not limited to the FTC and likely will affect all federal regulatory and enforcement activity. This is due in part to one of President Trump’s first executive orders, “Removing Barriers to American Leadership in Artificial Intelligence,” which “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”
That order repealed the Biden administration’s 2023 executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established guidelines for the development and use of AI. An example of this broader impact is the SEC’s proposed rule on the use of AI by broker-dealers and registered investment advisers, which is likely to be withdrawn in light of the recent executive order, especially given the acting chair’s public hostility toward the rule and the emphasis on reducing securities regulation in Chapter 27 of Project 2025.
The new administration has also been outspoken in international settings regarding its view that regulating AI will give advantages to authoritarian nations in the race to develop the powerful technology.
State Attorneys General Likely to Take on Role of AI Regulation and Enforcement
Given the dramatic shifts in direction and focus at the federal level, it is likely that short-term regulatory action will increasingly shift to the states.
In fact, state attorneys general of both parties have taken recent action to regulate AI and issue guidance. As discussed in a previous client alert, Massachusetts Attorney General Andrea Campbell has emphasized that AI development and use must conform with the Massachusetts Consumer Protection Act (Chapter 93A), which prohibits practices similar to those targeted by the FTC.
In particular, she has highlighted practices such as falsely advertising the quality or usability of AI systems or misrepresenting the safety or conditions of an AI system, including representations that the AI system is free from bias.
Attorney General Campbell also recently joined a coalition of 38 other attorneys general and the Department of Justice in arguing that Google engaged in unfair methods of competition by making its AI functionality mandatory for Android devices, and by requiring publishers to share data with Google for the purposes of training its AI.
Most recently, California Attorney General Rob Bonta issued two legal advisories emphasizing that developers and users of AI technologies must comply with existing California law, including new laws that went into effect on January 1, 2025. The scope of his focus on AI seems to extend beyond competition and consumer protection laws to include laws related to civil rights, publicity, data protection, and election misinformation.
Bonta’s second advisory emphasizes that the use of AI in health care poses increased risks of harm that necessitate enhanced testing, validation, and audit requirements, potentially signaling to the health care industry that its use of AI will be an area of focus for future regulatory action.
Finally, in a first-of-its-kind settlement, Texas Attorney General Ken Paxton resolved allegations that an AI health care technology company deployed its products at several Texas hospitals after making a series of false and misleading statements about the accuracy and safety of those products, including their error and hallucination rates.
As AI technology continues to impact consumers, we expect other attorneys general to follow suit in bringing enforcement actions based on existing consumer protection laws and future AI legislation.
Moving Forward with Caution
Recent successes by plaintiffs, combined with state regulators’ active focus on AI, should encourage businesses to proceed with thoughtful caution when investing in new technology. Fortunately, as we covered in our chatbot alert, businesses can take a wide range of measures to reduce risk, both during the due diligence process and upon implementing new technologies, including AI technologies, notwithstanding the change in federal priorities. Other countries, particularly in Europe, may also continue their push to regulate AI.
At a minimum, businesses should review their consumer-facing disclosures — usually posted on the company website — to ensure that any discussion of technology is clear, transparent, and aligned with how the business uses these technologies. Companies should expect the same transparency from their technology providers. Businesses should also be wary of so-called “AI washing,” which is the overstatement of AI capabilities and understatement of AI risks, and scrutinize representations to business partners, consumers, and investors.
Future alerts in this series will cover:
- Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
- Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.