- In May 2024, the Senate Rules and Administration Committee advanced three bills on AI and elections sponsored by Chair Amy Klobuchar (D-MN).
- These bills include provisions that would prohibit deceptive AI content, impose a labeling requirement for certain AI-generated election content, and direct the Election Assistance Commission to produce voluntary guidelines on AI for election administrators.
- Despite the advancement of the bills, their future remains uncertain due to disagreement among congressional leaders over the path forward on federal AI regulation.
On May 15, 2024, the Senate Rules and Administration Committee, chaired by Senator Amy Klobuchar (D-MN), held a business meeting at which senators marked up three bills concerning AI and elections. All three bills were sponsored by Chair Klobuchar, and each advanced out of committee at the May 15 meeting. The bills did not pass through without controversy and dissension, however.
This week, we analyze each of the three bills. We also discuss statements made by supporters and detractors of the bills and consider what the advancement of these bills suggests about the future of federal AI policy.
See It and (Don’t) Believe It: The Rise of AI Election Misinformation
Chair Klobuchar opened the meeting by warning of the risk that AI poses to election integrity. “This is a ‘hair-in-the-fire’ moment and here’s why: AI has the potential to turbocharge the spread of disinformation and deceive voters. Whether you are a Republican or a Democrat, no one wants to see these fake ads or robocalls.”
For years, experts have been warning that content generated or modified by AI could be leveraged to mislead voters. With the emergence of powerful and commercially accessible generative AI tools in 2022, these theoretical concerns have become real. In advance of the 2024 presidential election, images, video, and audio doctored by AI have been deployed to target both Democratic and Republican voters.
To respond to these developments, multiple state governments have enacted laws that either ban or require labeling of AI-generated election communications. Significantly, some of these measures have enjoyed broad bipartisan support. Additionally, multiple leading AI firms have affirmed their commitment to combating the spread of AI-generated election misinformation.
While Klobuchar acknowledged that state-level and private sector actors are taking concerted steps to address the threat of AI-driven election misinformation, she asserted that “We cannot rely on a patchwork of state laws and voluntary commitments.” To begin to address these concerns, Chair Klobuchar has introduced several bills on AI and elections, including the three under consideration during the May 15 Senate Rules meeting.
Banning Deceptive AI Election Content: S. 2770
S. 2770, the Protect Elections from Deceptive AI Act, would ban the distribution of “deceptive AI-generated audio or visual media” in carrying out a “Federal election activity” or in seeking to influence an election or solicit funds as a candidate for federal office.
This bill defines “deceptive AI-generated audio or visual media” as AI-generated content that would cause a “reasonable person” who has consumed the content to “have a fundamentally different understanding or impression of the appearance, speech, or expressive conduct exhibited in the image, audio, or video than that person would have if that person were hearing or seeing the unaltered, original version” of that media. The definition also covers AI-generated content that would lead a reasonable person to erroneously believe that it “accurately exhibits any appearance, speech, or expressive conduct” of the person depicted.
The bill provides exceptions for broadcasting stations airing deceptive AI-generated media as part of “a bona fide newscast…if the broadcaster clearly acknowledges through content or a disclosure…that there are questions about the authenticity” of the content. A similar exception applies to publications printing materially deceptive AI content, as long as those publishers include a clear content notice.
Under the act, individuals “whose voice or likeness appears in, or who is the subject” of materially deceptive AI-generated media distributed in contravention of the act can seek injunctive relief and damages.
Mandating Labeling for Certain AI-Generated Election Content: S. 3875
S. 3875, the AI Transparency in Elections Act, would mandate that certain political advertisements for federal elections containing content that is “substantially generated by artificial intelligence” carry a “special disclaimer,” made in a “clear and conspicuous manner,” indicating that the content was generated using AI.
The “general public political advertising” under consideration in this bill includes advertising that:
- Explicitly “advocates for or against the nomination or election of a candidate.”
- Refers to a candidate during the period beginning “120 days before the date of a primary election or nominating caucus or convention” and ending on the date of the general election.
- Solicits a contribution for a campaign.
For an advertisement to be considered “substantially generated by artificial intelligence,” and therefore fall under the purview of the act, it must be “created or materially altered using generative artificial intelligence.” This standard excludes content that “has only minor alterations by generative artificial intelligence” or that “does not create a fundamentally different understanding than a reasonable person would have from an unaltered version of the media.”
Violators of this act would be subject to a civil money penalty not exceeding $50,000 for each covered communication made in contravention of the act.
Finally, the act would order the Federal Election Commission to prepare and submit to Congress a report that includes “an assessment of the compliance with and enforcement of the requirements” of the act and “recommendations for any modifications” to the act.
Equipping Election Administrators for the AI Age: S. 3897
S. 3897, the Preparing Election Administrators for AI Act, would require the Election Assistance Commission (EAC), in consultation with the National Institute of Standards and Technology (NIST), to “develop voluntary guidelines for the administration of elections that address the use and risks of artificial intelligence technologies.”
The completed report would be submitted to Congress, issued to state and local election offices, and made available to the public. This report on “the use and risks of artificial intelligence technologies in the administration of elections” would cover the following four topics:
- The risks and benefits of using AI to conduct election administration activities.
- The potential cybersecurity risks of leveraging AI for election administration.
- How AI-generated information can impact the sharing of accurate election information.
- How AI-generated information can impact the spread of election disinformation “that undermines public trust and confidence in elections.”
Finally, the bill orders NIST and the EAC to submit to Congress a report on the use of AI technologies in the 2024 election by November 5, 2025. This report would include an analysis of “how information generated by artificial intelligence technologies was shared and the use of artificial intelligence technologies by election offices.”
Conclusion: AI Policy Juncture, or Another Dead End?
During the May 15 meeting, Senate Majority Leader Chuck Schumer (D-NY), a member of the Senate Rules Committee, strongly advocated for the passage of these pieces of legislation. “It’s fair to say that the 2024 election will be the first national elections held in the age of AI. Congress has a responsibility to adapt to this brave new world…If we’re not careful AI has the potential to jaundice or even totally discredit our election system…Our democracy may never recover if we lose the ability to differentiate between what’s true and what’s false.”
Not all Rules Committee senators concurred with Leader Schumer’s assessment of the bills. Ranking Member Deb Fischer (R-NE) stated that while she supported S. 3897, the Preparing Election Administrators for AI Act, the other two bills under consideration “miss the mark” on “addressing concerns” regarding the use of AI in elections.
“The issues surrounding AI and elections are complicated,” asserted Ranking Member Fischer. “We have to balance the potential for innovation with the potential for deceptive or fraudulent use. On top of that we can’t lose sight of the important protections our constitution provides for free speech in this country. These two bills do not strike that careful balance.”
Because the bills were easily voted out of committee, Fischer’s opposition clearly did not impede their advancement. However, the Ranking Member’s concerns may yet come back to haunt the legislation: sufficient opposition along the lines she enunciated could prevent the bills from being signed into law.
The issue of AI election misinformation offers an instructive case study of the challenges Congress faces in enacting federal AI legislation.
Lawmakers on both sides of the aisle agree on the importance of ensuring election integrity in the AI age. State lawmakers and corporate actors have already taken significant steps to address the issue of AI election misinformation. But despite this broad agreement of purpose, disagreements about implementation could doom the chances of these bills becoming law. While federal lawmakers can agree on commissioning reports or non-binding guidelines, more substantial and prescriptive federal AI policy has faced, and will continue to face, long odds in Congress.
We will continue to monitor, analyze, and issue reports on these developments.