Elon Musk, through X Corp., has filed a landmark lawsuit in the Eastern District of California challenging California’s Assembly Bill 2655 (AB 2655), alleging it violates constitutional protections and federal law while imposing impractical burdens on online platforms. This case marks a pivotal moment at the intersection of technology, free speech, and government regulation. X Corp.’s complaint argues that AB 2655 undermines First Amendment protections, conflicts with Section 230 of the Communications Decency Act, and creates an unworkable framework that risks chilling free expression and stifling innovation.
At its core, AB 2655 is an attempt to address the dissemination of “materially deceptive content” during elections. Proponents of the law argue it is necessary to preserve electoral integrity in the digital age, where deepfakes and AI-generated content can spread misinformation at unprecedented speeds. X Corp. contends, however, that the law is unconstitutional and overly vague, and that it imposes fundamentally unworkable obligations on platforms.
Here is the complaint: X Corp. v. Bonta
But if you don’t have time to flip through this monster… here is my breakdown.
Let’s start with a closer look at AB 2655.
California’s Assembly Bill 2655 (AB 2655), officially titled the Defending Democracy from Deepfake Deception Act of 2024, is aimed at addressing the spread of digitally manipulated media and misinformation on large online platforms. Enacted to protect the integrity of elections, the law targets platforms with significant user bases in California, requiring them to take proactive and reactive measures against what the state defines as “materially deceptive content.”
The statute applies to “large online platforms,” which it defines as public-facing internet services, video-sharing sites, or social media platforms with at least 1 million California users during the preceding 12 months. While the law ostensibly focuses on election-related content, its obligations and potential consequences extend broadly, placing substantial burdens on platforms to regulate user-generated material within tight deadlines. Here is the breakdown of AB 2655:
- Materially Deceptive Content Definition: The term “materially deceptive content” encompasses digitally altered media, including deepfakes and other AI-generated content, that could mislead a reasonable person into believing it authentically represents a candidate, elected official, or the election process.
Content qualifies as materially deceptive under the statute if it:
- Portrays a candidate for elective office as saying or doing something they did not say or do, in a manner likely to harm their reputation or electoral prospects.
- Portrays an election official as taking actions in their official capacity that could undermine public confidence in the election process or its outcomes.
- Portrays an elected official as influencing election outcomes in a way they did not actually do, undermining public confidence in the election.
This definition extends to media manipulated using digital tools, such as AI-generated content, where alterations mislead viewers about the authenticity of events, statements, or actions. Minor adjustments to content—like brightness changes or background noise removal—are excluded.
- Content Removal Requirements: Platforms are mandated to develop “state-of-the-art” systems to identify and remove materially deceptive content under specific conditions:
- The content must be flagged through a reporting mechanism.
- It must meet the law’s criteria for harm to a candidate’s reputation, electoral prospects, or public confidence.
- It must be posted during the designated election-related timeframes:
- 120 days before and through Election Day for content about candidates or elections.
- 120 days before and up to 60 days after an election for content about election officials.
Once content is determined to meet these criteria, platforms must remove it within 72 hours of receiving a report. If the same or similar content is reposted, platforms are required to proactively identify and remove it as well.
- Labeling Requirement for Deceptive Content: Content that does not meet the criteria for removal but is still deemed “materially deceptive” must be labeled. The labeling requirement applies to:
- Materially deceptive content posted outside the designated election periods.
- Election-related advertisements or communications that fall short of removal criteria but could still mislead viewers.
The mandated label must explicitly state:
“This [image, audio, or video] has been manipulated and is not authentic.”
In addition to this disclaimer, platforms must provide a clickable link to an explanation detailing why the content is considered deceptive. These labels must be prominent and clear to users.
- Reporting Mechanism: AB 2655 obligates platforms to create a user-friendly mechanism for California residents to report suspected violations. This reporting system must:
- Allow users to flag content they believe qualifies as materially deceptive under the statute.
- Require a platform response within 36 hours of receiving a report. The response must detail whether the content was removed, labeled, or left unchanged, along with the rationale for the decision.
This reporting mechanism shifts significant responsibility to users while imposing strict response obligations on platforms.
- Proactive Monitoring Requirement: The law demands that platforms deploy “state-of-the-art” techniques to identify materially deceptive content before it is reported. This includes monitoring reposted or substantially similar content that has previously been removed under AB 2655. The proactive nature of this requirement effectively forces platforms to engage in ongoing surveillance and analysis of user-generated content at scale.
- Enforcement: Lawsuits can be filed by:
- California’s Attorney General.
- District attorneys and city attorneys.
- Private actors, including candidates for elective office, election officials, and elected officials.
These entities can seek injunctive relief or other equitable remedies to compel platforms to comply with the removal, labeling, or reporting requirements. Enforcement is asymmetrical: platforms can be sued for failing to remove or label content, but face no comparable consequence for taking down lawful speech. This imbalance creates strong incentives for platforms to err on the side of censorship.
- Exemptions: The statute provides limited exemptions for certain entities and types of content:
- News outlets: Online newspapers, broadcasters, and magazines are exempt, provided they clearly disclose that the content is manipulated and does not accurately represent real events.
- Satire and parody: Content explicitly recognized as satire or parody is exempt, though the statute does not clearly define what constitutes satire, leaving room for subjective interpretation.
- Self-referential deceptive content: Candidates who post manipulated media of themselves are exempt from the removal requirements, provided they include a disclaimer stating the media is “manipulated.”
Key Deadlines Imposed by AB 2655:
- 36 hours: Platforms must respond to reports by users, explaining actions taken or justifying inaction.
- 72 hours: Platforms must remove or label flagged content if it meets the law’s criteria.
- Immediate Removal of Similar Content: Once content is removed under AB 2655, reposted or substantially similar content must be removed promptly during the election-related period.
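To make these stacked deadlines concrete, here is a minimal sketch of how a platform might track the statute’s clocks and its duplicate-removal duty. Everything in it is a hypothetical assumption rather than anything AB 2655 prescribes or X actually runs: the `ComplianceTracker` and `Report` names, the exact-hash check standing in for the statute’s undefined “state-of-the-art techniques” for spotting “substantially similar” reposts, and the simplified decision logic.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
import hashlib

# Deadlines taken from the statute's text.
RESPONSE_WINDOW = timedelta(hours=36)   # respond to the person who reported the content
REMOVAL_WINDOW = timedelta(hours=72)    # remove (or label) qualifying content

# The disclaimer wording mandated by AB 2655.
MANDATED_LABEL = "This {media_type} has been manipulated and is not authentic."


class Action(Enum):
    REMOVED = "removed"
    LABELED = "labeled"
    NO_ACTION = "no action"


@dataclass
class Report:
    content_id: str
    media_type: str                 # "image", "audio", or "video"
    media_bytes: bytes
    received_at: datetime
    in_election_window: bool        # e.g. within 120 days of Election Day


@dataclass
class ComplianceTracker:
    """Illustrative tracker for deadlines and previously removed content."""
    removed_fingerprints: set[str] = field(default_factory=set)

    def deadlines(self, report: Report) -> dict[str, datetime]:
        # Both clocks start when the report is received.
        return {
            "respond_by": report.received_at + RESPONSE_WINDOW,
            "act_by": report.received_at + REMOVAL_WINDOW,
        }

    def is_known_repost(self, report: Report) -> bool:
        # Crude stand-in for detecting "substantially similar" content:
        # an exact SHA-256 match against previously removed media. Real
        # similarity detection would be far harder, and the statute does
        # not define what techniques suffice.
        return self._fingerprint(report.media_bytes) in self.removed_fingerprints

    def decide(self, report: Report, meets_harm_criteria: bool) -> tuple[Action, str]:
        """Return the action taken and the explanation owed to the reporter."""
        if self.is_known_repost(report):
            return Action.REMOVED, "Matches content previously removed under AB 2655."
        if meets_harm_criteria and report.in_election_window:
            self.removed_fingerprints.add(self._fingerprint(report.media_bytes))
            return Action.REMOVED, "Meets the statute's removal criteria."
        if meets_harm_criteria:
            label = MANDATED_LABEL.format(media_type=report.media_type)
            return Action.LABELED, f"Labeled with: {label!r}"
        return Action.NO_ACTION, "Content does not meet the statutory criteria."

    @staticmethod
    def _fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()


if __name__ == "__main__":
    tracker = ComplianceTracker()
    report = Report(
        content_id="post-123",
        media_type="video",
        media_bytes=b"example media bytes",
        received_at=datetime(2024, 10, 1, 9, 0),
        in_election_window=True,
    )
    print(tracker.deadlines(report))
    print(tracker.decide(report, meets_harm_criteria=True))
```

Even in this toy form, the sketch surfaces the problem X Corp. hammers on below: the hard parts (what counts as “substantially similar,” what “state-of-the-art” detection means, whether harm is “reasonably likely”) are exactly the terms the statute leaves undefined.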
Now back to the lawsuit…
X Corp.’s 65-page, 3-count complaint thoroughly outlines its grievances with AB 2655—claiming it is unconstitutional, conflicts with federal law, and imposes impractical and burdensome requirements on platforms like X.
Count I
In Count I of the Complaint, X Corp. alleges that AB 2655 violates the First Amendment of the United States Constitution and Article I, Section 2, of the California Constitution, both facially and as applied. At the core of this claim is the assertion that AB 2655 imposes unconstitutional prior restraints on speech, discriminates based on content and viewpoint, and compels platforms to speak in ways that undermine their editorial discretion.
The complaint alleges that AB 2655 mandates the removal or alteration of speech deemed “materially deceptive,” resulting in the suppression of speech before it is conclusively determined to be unprotected. By requiring platforms to remove flagged content within 72 hours and to monitor for “substantially similar” content, the statute effectively imposes a system of censorship. X Corp. argues that these provisions amount to prior restraints, which are disfavored under First Amendment jurisprudence due to their chilling effect on speech.
X Corp. also contends that AB 2655 discriminates based on content, viewpoint, and speaker identity. The statute applies only to “covered platforms,” exempting traditional media outlets like newspapers and broadcasters while targeting social media platforms like X. Furthermore, the complaint alleges that the law disproportionately impacts negative or critical speech about candidates or elections, while leaving positive commentary unregulated. This selective application, X Corp. asserts, constitutes viewpoint discrimination in violation of the First Amendment.
Finally, the complaint claims that AB 2655 compels speech by forcing platforms to label certain content with state-mandated disclaimers about its authenticity. X Corp. argues that this requirement infringes on platforms’ right to refrain from speaking, a right recognized in cases like Wooley v. Maynard and NIFLA v. Becerra. By mandating the inclusion of government-approved labels, the law compels platforms to endorse the state’s perspective on what qualifies as “materially deceptive content,” thereby altering the platforms’ editorial voice and violating their autonomy.
Count II
In Count II, X Corp. asserts that AB 2655 is preempted by Section 230 of the Communications Decency Act (“CDA”), a federal law that protects platforms from liability for third-party content and their moderation decisions. The complaint emphasizes that Section 230 was enacted to ensure the free exchange of ideas online and to prevent platforms from being treated as publishers of user-generated content.
Under Section 230(c)(1), platforms cannot be held liable for content created by third parties. AB 2655, however, imposes liability on platforms for failing to remove or label “materially deceptive content,” effectively treating them as publishers of that content. X Corp. argues that this obligation directly conflicts with Section 230, which bars holding platforms liable as publishers of third-party content.
Count III
In Count III, X Corp. contends that AB 2655 is unconstitutionally vague, violating the Due Process Clause of the Fourteenth Amendment and the First Amendment. The complaint argues that the statute fails to provide clear and objective standards, leaving platforms unable to determine what is prohibited and opening the door to arbitrary enforcement.
X Corp. identifies several problematic terms in the statute:
- “Materially deceptive content”: Defined as content that “would falsely appear to a reasonable person to be an authentic record,” this phrase lacks clear criteria for what qualifies as “reasonable” or “authentic.”
- “Reasonably likely to harm”: This standard, which evaluates the reputational or electoral impact of content, is inherently subjective and depends on speculative judgments about potential harm.
- “State-of-the-art techniques”: Platforms are required to use undefined methods to monitor and address deceptive content, forcing them to guess at what qualifies as compliance with the law.
The complaint alleges that this vagueness forces platforms to over-censor content to avoid liability, chilling lawful and protected speech. Additionally, the lack of clear standards enables arbitrary enforcement by state officials, creating a risk of politically motivated actions.