Synthetic Content, Real Regulations: Regulation of Artificial Intelligence in Political Advertising

Venable LLP

As campaigns explore new ways to harness artificial intelligence, regulators are rushing to keep pace ahead of the 2024 elections. The explosion in generative AI has put pressure on lawmakers and advertising platforms alike to stay ahead of deepfakes, voice clones, and other political advertising that may deceive voters or spread misinformation, all while balancing the promise of “friendly” applications that increase efficiency and affordability in campaign tools.

But regulating AI in political communications poses unique challenges. What qualifies as deceptive advertising? Can deceptive uses of AI be banned, given the First Amendment’s special protections for political expression? Who is regulating AI-generated political ads, and who is responsible for enforcing any controls? Do advertising platforms have a role in policing the content?

Venable’s Political Law Practice Group is monitoring ongoing efforts to regulate AI in political advertising at the federal, state, and industry levels. The following highlights some of these efforts and the emerging trends.

Federal. AI is front of mind for the Biden administration and members of Congress across the political spectrum, but action on AI in federal elections has been incremental.

  • Congress. A bipartisan group of senators has introduced legislation that would ban the distribution of “materially deceptive AI-generated audio or visual media” about federal candidates and allow people who are the subject of fake ads to sue those responsible for creating or distributing them, though not the online advertising platforms where such ads are placed. But an outright ban raises constitutional questions, and Congress has not yet moved on the proposal.
  • Federal Election Commission. Last summer, the FEC announced it would move forward with a proposed rulemaking on deceptive campaign ads using generative AI. Existing campaign finance law prohibits fraudulently misrepresenting oneself as acting on behalf of a candidate or party in certain circumstances, and the proposal seeks to define AI-generated deepfakes as a form of prohibited misrepresentation. However, some commissioners have expressed skepticism that the current law authorizes the agency to prohibit damaging AI-generated depictions unless the ad purports to come from the depicted candidate’s campaign. In their view, the law does not allow the FEC to bar someone from making a misleading ad about another candidate if it is clear that someone other than the candidate made it. In any event, the FEC has yet to offer a proposed rule that would move the question forward, though the FEC’s chair has commented publicly that they hope to act by early summer.
  • Federal Communications Commission. The FCC has acted on AI in political advertising, issuing a declaratory ruling this month affirming that calls using AI-generated voice clones are a form of artificial or prerecorded voice subject to the requirements of the Telephone Consumer Protection Act. The ruling, which specifically cited instances of deceptive campaign calls spoofing a candidate’s voice, requires callers using AI technologies to obtain the express consent of recipients and to provide identifying information about the caller, among other requirements.

State. States have authority to regulate the conduct of their own elections, and many have already taken up the issue of AI in state and local election advertising. Five states have enacted laws, while at least 40 have pending bills working their way through current legislative sessions. Approaches vary from state to state, but many include bans on deceptive uses of AI, alone or in combination with carve-outs for ads that disclose the use of AI. An overview of some enacted laws is provided below.

  • California (Cal. Elec. Code § 20010): Beginning in 2023, California prohibited the distribution of “materially deceptive” audio or visual media showing a candidate for office within 60 days of an election “with the intent to injure the candidate’s reputation or deceive a voter into voting for or against the candidate.” Images or audio are deemed “materially deceptive” if they have been intentionally manipulated such that a reasonable person would believe the content to be authentic and would “have a fundamentally different understanding or impression of the expressive content” than if they were seeing the unaltered version. However, any communication that includes a disclaimer stating that it has been “manipulated” is exempt from the prohibition.

A candidate may pursue injunctive relief to stop the publication and seek damages against any entity that distributes the media. Notably, the law expressly does not apply to any radio or television broadcasting station or cable and satellite television operators paid to distribute the media. That carve-out does not extend to websites or other online publications unless the publication states that the deceptive audio or visual material “does not accurately represent the speech or conduct of the candidate.”

  • Washington (Rev. Code Wash. 42.62): Washington State regulates “synthetic media,” defined as images, audio, or video of an individual that have been “intentionally manipulated with the use of generative adversarial network techniques or other digital technology” in a way that creates a “realistic but false image, audio, or video,” when used in communications intended to influence voters and distributed within 60 days of an election. Like California, Washington looks to whether the depicted conduct actually occurred and whether a reasonable person would take away a fundamentally different understanding or impression than from the unaltered original. Also like California, Washington’s law provides that including a disclaimer identifying the media as “manipulated” serves as an affirmative defense against any enforcement.

If a candidate is affected, they may seek both to enjoin distribution and to bring an action for general or special damages. Note that the sponsor of the communication is not the only target a candidate could pursue: media disseminating the content, such as “interactive computer service” providers like Facebook, Twitter, and Google, are also potentially liable. However, they would be liable only if they remove the synthetic content disclaimer or alter the communication to include synthetic media (two fairly unlikely scenarios). Federally licensed broadcast stations are specifically exempt from liability.

  • Texas (Tex. Elec. Code Ann. § 255.004): Texas has banned creating a deepfake video and causing it to be distributed within 30 days of an election with the intent to injure a candidate or influence the result of an election. “Deepfake” is defined as a video that was created with the intent to deceive and appears to depict a real person performing an action that did not occur in reality.

The statute’s application appears limited: audio advertisements and display ads appear to be exempt. Furthermore, only the “creator” of the communication may be liable, meaning an entity that merely hosts or republishes the communication faces no liability.

  • Minnesota (Minn. Stat. § 609.771): Minnesota law prohibits disseminating, or contracting to disseminate, a deepfake within 90 days of an election without the consent of the depicted person, with the intent to injure a federal, state, or local candidate or to influence the result of an election. While the statute on its face appears to cover federal candidates, it is unlikely that it could be legally applied to such communications.

The statute allows prosecutors to bring criminal charges against those who disseminate such deepfakes. Injunctions are also an available remedy, but they can be sought only by a limited set of persons, including the Minnesota attorney general, any county or city attorney, the depicted individual, or any candidate “who is injured or likely to be injured by dissemination.”

Self-Regulation. Finally, concerned about potential liability and reputational damage their businesses may face due to advertisers’ use of AI, some media companies have begun to implement internal mechanisms regulating the use of AI in political advertising. Several major online ad platforms now require a disclaimer on political advertisements containing realistic audio, images, or video that is digitally created or altered to depict a real person saying or doing something that did not occur.

*             *             *

The constitutionality and effectiveness of laws on AI in political advertising have not yet been tested, and it remains to be seen whether these measures are the leading edge of a new regulatory trend. On First Amendment grounds, courts have struck down bans on false political speech, as well as laws analogous to a ban on AI-generated content, such as a federal law prohibiting child pornography in which no real minors are depicted. Some also argue that well-established defamation law already discourages inaccurate political advertisements, making additional protections both unnecessary and likely to chill political speech. But regulating AI technologies has some bipartisan appeal, and governments’ desire to protect voting rights and free and fair elections is a strong countervailing interest.

As officials grapple with these considerations, entities that create or distribute political advertising should stay attuned to developments in the regulation of artificial intelligence.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Venable LLP | Attorney Advertising
