Deep Fakes in Elections (S.23) - Overview & Analysis

The bill aims to regulate the use of synthetic media (specifically deepfakes) in elections by requiring a disclosure on deceptive and fraudulent synthetic media distributed within 90 days of an election.

The Details:

  • The bill defines "deceptive and fraudulent synthetic media" as any synthetic representation of an individual intended to mislead voters, harm a candidate's reputation, or influence the outcome of an election.

  • The bill mandates that any synthetic media that meets this definition must include a clear disclosure stating "This media has been created or intentionally manipulated by digital technology or artificial intelligence." Specific guidelines for the visibility and audibility of these disclosures are also outlined, depending on whether the media is visual or audio.

  • A "reasonable person" standard of evidence is outlined in the bill. Meaning that the content would deceive a reasonable person. This is intended to filter out any content that is obviously fake.

  • Synthetic media, as outlined above, is only restricted within 90 days of an election.
  • Candidates who are misrepresented by deceptive synthetic media may seek injunctive relief.

  • An enforcement mechanism is provided by allowing the State's Attorney or Attorney General to take action against violations.

  • Radio, television, and streaming newscasts that regularly publish general-interest news content are largely exempt from the bill.

  • The bill establishes penalties for violations, with fines ranging from $1,000 to $15,000 based on the violator's intent and prior offenses. The Attorney General can also seek injunctive relief to prevent further violations of this legislation.
  • The bill would take effect upon passage.

The Good:

  • Puts protections in place to prevent deepfakes from deceiving voters leading up to an election.
  • Requires clear disclosures on AI-generated content.

The Bad:

  • Free speech concerns.
  • Potential for people sharing content on social media to inadvertently violate this law.

Analysis:

Twenty-one states have passed similar legislation, and there is growing recognition of the need to address the risks posed by deepfakes in elections. Protecting the integrity of elections is critical to maintaining our democracy, and new AI-powered tools are making it increasingly difficult to determine what is real and what is fake.

One notable incident occurred in Slovakia, where an audio deepfake suggesting a candidate wanted to rig the vote circulated just before an election. The use of deepfakes in campaigns is concerning and almost certainly poses a risk to free and fair elections.

Other states have faced free speech challenges over similar pieces of legislation. Lawmakers will need to be careful when crafting the language to not violate free speech rights. This is one of the reasons why the bill only requires the labeling of deep fakes instead of outright banning the practice in elections.

Additionally, there is a risk that citizens or campaigns may inadvertently violate this law by sharing content on social media that they didn't realize was AI-generated. Revisions to the bill do offer some protection for people who share content they did not know was AI-generated, but that protection hinges on the same "reasonable person" standard and may not cover every good-faith sharer.

However, these are all navigable problems and some protection for campaigns against deep fake attacks would be welcome.

 

Current Status:

The Senate passed the bill via voice vote on March 20, 2025. The bill will now be reviewed by the House.

 

News coverage on S.23

Read the Bill

More bill summaries

Last updated: 3/21/2025

DISCLAIMER: Generative AI used to assist in the production of this report.
