State Regulations of AI in Elections

McDonnell Boehnen Hulbert & Berghoff LLP

The recent rise of generative artificial intelligence (AI) models has given the public powerful tools for creating realistic, yet fake, images, sounds, and videos. Though more than six months remain until the U.S. general election in November, these tools have already been used to create fake calls from President Biden and fake images of former President Trump.

Motivated by the impact that such tools could have on an election, at least ten states have enacted statutes governing the use of AI models in electioneering: California, Indiana, Michigan, Minnesota, New Mexico, Oregon, Texas, Utah, Washington, and Wisconsin (some of these statutes do not specifically mention or imply AI but are written broadly enough to cover its use). Moreover, four states, Michigan (bill 1, bill 2), Minnesota, New York, and Washington, recently enacted further statutes governing the use of synthetic media in elections.**

The statutes impose a variety of proscriptions, ranging from requiring disclosure of the use of AI models to creating the possibility of injunctions and criminal charges for the deceitful use of such models. While potentially powerful, many of these statutes require a showing of deceitful intent. Such a showing may prove difficult to establish, and it may hinder the effective use of these statutes to regulate AI-generated content, especially if such content is released shortly before an election.

Generally, state statutes do not call for an outright ban on the use of AI models to generate images, sounds, or videos relating to candidates or political parties. However, nearly all statutes require some form of disclosure indicating that the content was generated using AI. An example is California Election Code § 20010, which prohibits the distribution of "materially deceptive audio or visual media . . . of [a] candidate with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate." This prohibition does not apply if the audio or visual media includes a disclosure stating: "This [Image/Video/Audio] has been manipulated." Such a disclosure must "appear in a size that is easily readable by the average viewer" or, for audio, be communicated in "a clearly spoken manner and in a pitch that can be easily heard by the average listener." Such disclosures are crucial, especially as AI-generated content becomes increasingly common and humans become less able to differentiate real images from those generated using AI.

Though each state's statute is different, a common trend across these statutes is the creation of disincentives for the misuse of AI-generated content in elections. Three states, Michigan, Minnesota, and Texas, establish criminal penalties for the misuse of AI-generated content in the context of an election. One such example is Tex. Elec. Code Ann. § 255.004, which makes it a Class A misdemeanor to create, and cause to be published or distributed, a "deep fake video," defined as "a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality." The punishment for a Class A misdemeanor is a fine not exceeding $4,000, a jail term not exceeding one year, or both.

As noted, many statutes addressing the use of AI in elections have some form of intent requirement, generally an intent to deceive or to affect an election. Such a requirement has the advantage of limiting liability in cases where the use of AI was not intended to harm an individual or affect an election. However, such a threshold can hinder the effective use of injunctive relief, especially when content is created shortly before an election. Interestingly, neither New Mexico's statute (N.M. Stat. Ann. § 1-19-26.4) nor Indiana's statute (Ind. Code § 3-9-8-6) requires intent for liability. Thus, both statutes treat misuse of AI during an election as a strict liability offense. But like other states, both allow the publication of election content generated using AI if the content includes a disclosure acknowledging the use of AI in its production.

So far, a relatively small number of states have taken steps to address potential misuse of AI during the upcoming 2024 election. These statutes could help curb the misuse of AI in the generation of misleading video, pictorial, and audio content, but only within those states. Further, only two of these states (Michigan and Wisconsin) are considered "swing states" that could end up deciding the election. Notably, none of these statutes would punish the developers of AI models, thus implicitly acknowledging the dual-use nature of these models.

* Yuri Levin-Schwartz, Ph.D., is a law clerk at MBHB.

** For more information on state statutes governing the use of AI in elections, see "Artificial Intelligence (AI) in Elections and Campaigns".

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© McDonnell Boehnen Hulbert & Berghoff LLP | Attorney Advertising
