FEC Considers Possible Restrictions on AI and Deepfakes in Campaign Ads

Davis Wright Tremaine LLP

A Petition for Rulemaking asks the Federal Election Commission to clarify that existing law prohibits deliberately deceptive AI-generated advertisements

On August 16, 2023, the Federal Election Commission (FEC) published a Notice of Availability of Petition for Rulemaking (Notice) seeking comment on whether it should initiate a formal rulemaking to clarify that existing federal election law and FEC regulations prohibit any deliberately deceptive use of artificial intelligence (AI) technology and "deepfakes" in campaign advertisements, unless such use is clearly satire or parody "where the purpose and effect is not to deceive voters." The Petition, filed by Public Citizen in July, asks the FEC to require ads depicting "fictitious actions and statements" to prominently disclose that the content is generated by AI and does not represent real statements or events. Although the FEC is seeking comment on whether to commence a formal rulemaking, it has not taken a position on the Petition and will not consider its merits until the comment period concludes.

Existing Federal Regulation Framework

The Petition maintains that the use of generative AI technology to "create convincing images, audio and video hoaxes" designed to manipulate elections through campaign advertisements, including the use of deepfakes, is not clearly prohibited by existing federal laws. "Deepfakes" are AI-generated audio, video, or other visual depictions and impersonations of someone that are manipulated and made to appear as convincing originals, making it difficult or nearly impossible for the average person to detect that they are not real.

Under the Federal Election Campaign Act, 52 U.S.C. § 30124(a)(1) (Act), and the FEC's implementing regulations, 11 C.F.R. § 110.16, it is illegal for a political candidate to fraudulently misrepresent another political candidate, but neither the statute nor the regulations explicitly address the use of AI in carrying out the wrongful conduct. More specifically, candidates for federal office, their agents, and their employees are prohibited from fraudulently misrepresenting themselves, or any committee or organization under their control, as speaking, writing, or otherwise acting for or on behalf of any other candidate or political party on a matter damaging to that candidate or party, and from willfully and knowingly participating, or conspiring to participate, in any plan or scheme to do so. The Act and regulations also prohibit such fraudulent misrepresentations, including participating in or conspiring to commit them, for the purpose of soliciting campaign contributions or donations.

In short, existing law prohibits creating and disseminating fake videos, audio recordings, or photos of another party's political candidate that depict the candidate saying words they never said or taking actions they never took, such as falsely portraying them delivering fabricated speeches or engaging in illegal conduct. The Petition goes a step further, suggesting that the FEC's regulations be amended to expressly prohibit the use of AI technology to create altered deepfake content, since that conduct is not clearly prohibited by the Act or the FEC's regulations. The Petition also asks that, unless the deepfake content is clearly satire or parody and not intended to deceive voters, it be required to include a "clear and conspicuous disclosure" that the "content is generated by artificial intelligence and does not represent real events."

Regulatory Challenges

As AI technology continues to evolve rapidly, updating existing regulatory frameworks to keep pace with emerging technologies remains challenging. Creating and distributing fake content about political candidates with the intent to deceive voters poses significant risks, as voter disinformation can influence election outcomes. At the same time, while prohibiting harmful tactics is important, lawful and beneficial uses of AI should not be restricted. For example, the First Amendment would protect the use of deepfakes where the content is clearly parody or satire that is not intended to deceive voters. Although some states have passed deepfake laws, current federal law and regulations do not provide clear guidance on this issue. The AI team at DWT has closely followed deepfake legislation and provided an overview of California's legislation and past steps the federal government has taken to address deepfakes in campaign ads.

The current FEC rulemaking is an opportunity for commenters to suggest how amendments to the existing regulations should be framed to appropriately address fraudulent use of deepfakes in elections while ensuring that the regulations do not inadvertently limit political expression, opinion, or satire protected by the First Amendment.

Next Steps

Comments on the Petition for Rulemaking are due by October 16, 2023.

*Edlira Kuka, a recent graduate of Seattle University School of Law, is a Communications Law, Regulation, and Policy Manager at DWT.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Davis Wright Tremaine LLP | Attorney Advertising
