With Election Season in Full Swing, Many States Legislate on Deepfakes in Political Ads – Broadcasters Among Those Caught in the Crosshairs

Brooks Pierce

With November’s Election Day less than five months away and an onslaught of political ads already hitting the airwaves and social media feeds, many state legislatures have enacted or are considering passing legislation targeted at deceptive advertising created using generative artificial intelligence, commonly referred to as “deepfakes.” Although definitions of “deepfake” vary, a deepfake is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” Consider a couple of examples: Last year, former presidential candidate Ron DeSantis’ campaign posted a social media video that included allegedly fake images of former President Donald Trump hugging Dr. Anthony Fauci. Then, in the run-up to the 2024 New Hampshire Primary Election, voice-cloned robocall audio meant to sound just like President Biden told voters not to vote in the Primary and instead to “save your vote for the November Election” when it would be more important.

With modern deepfakes employing a combination of advanced AI techniques, including models trained on datasets containing millions of samples, generative AI can now present a lie in a manner to which the human brain is particularly susceptible. In addition to the clear danger that disinformation poses when someone believes the falsehood, psychologists suggest that deepfakes can leave an influential and lasting impression, even when the observer knows the deepfake is false. Lawmakers in many states are especially concerned over generative AI’s influence in the context of free and fair elections, and they are taking action.

The federal government is also well aware of the problems deepfakes can cause, as illustrated by a regulatory proposal advanced by the Federal Election Commission, the introduction of several bills in Congress, and related hearings. On May 15, a bipartisan group of Senators released an AI “Roadmap” outlining a number of policy priorities; however, it did not include any specific proposals related to deepfakes in political ads. That same day, the Senate Committee on Rules and Administration advanced two bipartisan bills to regulate deepfakes in political ads. The “AI Transparency in Elections Act” would require disclaimers on political ads with deepfake images, audio, or video, and the “Protect Elections from Deceptive AI Act” would ban deepfakes depicting federal candidates in political ads. Senate Majority Leader Chuck Schumer expressed his desire to pass legislation before the November Election, but Senate Minority Leader Mitch McConnell reportedly fears that overly broad legislation may result in censorship of political speech.

While we wait to see whether federal legislation ever becomes a reality, state legislatures are forging ahead. Before 2024, five states (California, Michigan, Minnesota, Texas, and Washington) had laws regulating deepfakes in political ads. As of May 17, 2024, 15 states have enacted such legislation (with a few more bills awaiting the governor’s signature), and several more states have legislation pending. These state laws and proposals take varied approaches to the issue: most do not prohibit deepfakes outright; instead, most mandate disclaimers on certain ads that feature deceptive deepfakes, impose either civil or criminal penalties for violations, and many provide safe harbors from liability.

Generally, state laws in this space are meant to target the individuals or entities who create and then seek to distribute fraudulent deepfakes for the purpose of influencing elections. But, as is common with any legislation, many of the deepfake bills, as written, may have unintended consequences. Specifically, many of these laws and proposals can be read to ensnare broadcasters, burdening them with difficult legal dilemmas and compliance responsibilities. Here’s some of what we’re seeing:

General Broadcaster Exceptions

Thankfully, some state legislatures have tried to exclude broadcasters from liability when they merely air political ads that include deepfakes. States have done so in divergent ways, such as express broadcaster exceptions and provisions that impose liability on the entity that created, or paid for, the ad. Wisconsin’s statute, for example, states “[n]o liability for a violation of this subsection shall attach to any person who is a broadcaster or other host or carrier of a video or audio communication . . . unless the person is a committee responsible for the communication.” Mississippi’s recently enacted law excludes a “radio or television broadcasting station . . . when the station or online platform is paid to broadcast any digitization prohibited by this section.” Texas’ legislation limits the statute’s application to a person who, with intent to injure a candidate or influence an election, “creates a deep fake video” and “causes the deep fake video to be published or distributed within 30 days of an election.”

Candidate Advertisements

The most glaring omission in some statutes is an exception for ads that are paid for by a legally qualified candidate and/or the candidate’s authorized campaign committee. Under federal law (47 U.S.C. § 315(a)), broadcasters are generally prohibited from censoring the content of an ad that is paid for by a legally qualified candidate for public office and/or such candidate’s authorized campaign committee (this “no censorship” provision of federal law applies to ads paid for by federal, state, and local candidates). Even if, for example, a broadcaster knows that a candidate ad contains a deepfake, the broadcaster is generally required by law to air the advertisement without modification. Several deepfake statutes, as well as proposals throughout the country, could conceivably apply to broadcasters and do not include an exception for candidate ads; in other words, many laws and proposals at the state level inherently have a “preemption” issue, insofar as they could be read to directly contravene federal law. On the other hand, some states, often thanks to the input of broadcasters and their advocates, expressly state that broadcasters are immune from any liability when airing a deepfake contained in an advertisement (i.e., a candidate ad) that is subject to 47 U.S.C. § 315. See Idaho’s House Bill 664 and Alabama’s House Bill 172.

Good-Faith Requirements

Several bills exclude a broadcaster from liability for airing a deepfake in a political ad when the broadcaster has made a good-faith effort to establish that the depiction in the ad is not a deepfake. Although many broadcasters already undertake efforts to discover deepfakes, the “good-faith effort” language is ambiguous and hinges on undefined actions. These state laws (New York is one recent example) put the burden on the broadcaster to make a “good-faith” effort to detect deepfakes. This is inherently tricky: deepfakes are produced with the goal of deceiving the viewer or listener (making them nearly impossible to identify), and broadcasters often must act in time-sensitive situations and under a host of other legal restrictions.

Policy on Deepfakes

New Mexico’s House Bill 182, enacted into law in March 2024, takes a different approach to broadcaster liability. The statute does not apply to a broadcaster that is paid to air an ad if the broadcaster can show that it maintains disclaimer requirements (i.e., a policy requiring ads that contain deepfakes to disclose “in a clear and conspicuous manner” that the ad has been “manipulated or generated by artificial intelligence”) and that it provided the policy to each person or entity running ads on its station.

State legislatures understandably are moving quickly to try to regulate generative AI, particularly as it can be used deceptively in efforts to influence elections. In their haste, however, lawmakers may be overlooking the unintended consequences of their legislation. Broadcasters should be cognizant of how the specific provisions of recently passed statutes and proposed legislation obligate them in different ways.
