AI-Generated Child Sexual Abuse Material: How Companies Can Reduce Risk

Orrick, Herrington & Sutcliffe LLP

Companies with an online presence must stay vigilant about current and proposed legislation aimed at protecting children online. With the growing use of artificial intelligence (AI), companies face a growing and unprecedented risk of inadvertently possessing illegal material, particularly AI-generated child sexual abuse material (CSAM).

What is CSAM?

Child sexual abuse material is defined as any visual depiction of sexually explicit conduct involving a minor. The term “child pornography” is still widely used, but legislators, law enforcement, and advocates increasingly prefer terms that emphasize the abusive nature of the content, such as “child sexual abuse material” or “child sexual exploitation and abuse imagery” (CSEAI). In the United States, efforts to combat CSAM online, and most recently AI-generated CSAM, enjoy broad bipartisan support.

What the Law Says

Under federal law, knowing possession of CSAM is a crime. Importantly, the law treats AI-generated CSAM the same as real-life CSAM. Specifically, federal law prohibits:

  • Any visual depiction of CSAM that is “indistinguishable” from an actual minor engaging in sexual conduct.[1]
  • Visual depictions of any kind – including computer-generated images, drawings, cartoons, sculptures, or paintings – that show a child engaging in sexual conduct, if the depiction is obscene or lacks serious artistic value.

The law does not require that a depicted minor actually exist. As a result, people and organizations risk criminal liability even if the CSAM they host does not depict an actual child.

What We’ve Seen from Regulators and What to Expect in 2024

We expect discussion of, and pushes for, online-safety and AI-focused legislation to intensify in the coming year, building on last year’s momentum.

  • In July 2023, Sen. Jon Ossoff (D-Ga.) and Sen. Marsha Blackburn (R-Tenn.) announced a bipartisan inquiry to protect children from AI-generated sexual abuse content. They called on Attorney General Merrick Garland and the DOJ to increase resources to prosecute cases involving AI-generated CSAM.
  • In September 2023, the National Association of Attorneys General asked Congress to “study the means and methods of AI that can be used to exploit children” and “expand existing restrictions on CSAM to explicitly cover AI-generated CSAM.”
  • In November 2023, a Congressional committee held a hearing on deepfake technology. Experts testified to the dangers of AI-generated sexual content, including AI-generated CSAM.
  • In December 2023, the U.S. Senate passed the REPORT Act, which would increase obligations related to reporting and preserving CSAM-related evidence. The House of Representatives is expected to consider the legislation in 2024.

How to Avoid Running Afoul of Federal Laws

  1. Ensure your terms of service and vendor agreements address CSAM.
    • Online service providers should make clear in their terms of service that they have zero tolerance for CSAM and that they will report any such material they detect to law enforcement.
    • Contracts with AI vendors should incorporate stringent clauses that outline their responsibility to prevent the generation or dissemination of CSAM.
      • This could include warranties, representations, and indemnification clauses specific to CSAM risks or, as applicable, compliance with the law.
      • It also could include contractual language permitting audits of the AI vendor’s practices and systems, including third-party audits, to ensure compliance with contractual terms and relevant laws.
  2. Prepare an internal reporting procedure.
    Companies should put in place a formal procedure, including technical protocols, to swiftly manage and report CSAM on their systems. Federal law requires interactive service providers to report CSAM on their systems to the National Center for Missing and Exploited Children (NCMEC) CyberTipline.[2] (A minimal workflow sketch appears after this list.)
  3. Consider how AI can be used to moderate content for CSAM.
    Companies using AI for content moderation can explore strategies that harness AI to prevent the proliferation of CSAM. For example, providers can screen content against NCMEC or other known-CSAM hash databases to quickly identify, remove, and report CSAM. (See the hash-matching sketch after this list.)
  4. Conduct vendor due diligence.
    Ensure AI vendors have robust content moderation policies and technologies. This includes reviewing their record in handling sensitive content and their compliance with CSAM-related legal standards.
  5. Monitor legislation.
    It has become increasingly clear that legislators are poised to tackle the growing concerns about online child safety raised by rapid AI advances. Congress is considering several bills focused on online child safety. Orrick will report future developments.
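To make step 2 concrete, below is a minimal sketch, in Python, of what an internal escalation workflow might look like. All function names and the report flow (e.g., quarantine_item, submit_cybertip_report) are hypothetical placeholders: NCMEC operates a CyberTipline reporting mechanism for registered providers, but the actual interface, required fields, and legal obligations should be confirmed with counsel and NCMEC directly.

```python
# Hypothetical internal CSAM-handling workflow (illustrative only).
# Function names and the report step are placeholders, not a real NCMEC API.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FlaggedItem:
    item_id: str
    uploader_id: str
    storage_uri: str
    detected_at: datetime

def quarantine_item(item: FlaggedItem) -> None:
    """Remove the item from public access while preserving the original
    evidence, since federal preservation obligations apply to reports."""
    ...  # e.g., move to restricted, access-logged storage

def notify_internal_team(item: FlaggedItem) -> None:
    """Alert legal / trust & safety on a need-to-know basis."""
    ...

def submit_cybertip_report(item: FlaggedItem) -> str:
    """Placeholder for filing a CyberTipline report with NCMEC.
    Real submissions require provider registration with NCMEC."""
    ...
    return "report-id-placeholder"

def handle_flagged_content(item: FlaggedItem) -> None:
    quarantine_item(item)                      # 1. stop further dissemination
    notify_internal_team(item)                 # 2. escalate internally
    report_id = submit_cybertip_report(item)   # 3. file the mandatory report
    # 4. keep an auditable action trail for compliance review
    print(f"{datetime.now(timezone.utc).isoformat()}: filed {report_id}")
```

Note that the quarantine step deliberately preserves the original file rather than deleting it outright: federal law imposes evidence-preservation obligations on providers that file CyberTipline reports.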
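And for step 3, below is a minimal sketch of hash-based matching, assuming a locally cached set of known-CSAM hash values. It uses a cryptographic hash (SHA-256) for simplicity; production systems typically use perceptual hashes (such as Microsoft’s PhotoDNA or Meta’s PDQ) that tolerate resizing and re-encoding, and access to NCMEC hash lists requires an agreement with NCMEC.

```python
# Hypothetical hash-matching check (illustrative only). Real deployments
# use perceptual hashing and vetted hash lists obtained under agreement.

import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_csam(path: str, known_hashes: set[str]) -> bool:
    """Exact-match lookup against a known-hash blocklist. A match should
    trigger the removal-and-report workflow sketched above."""
    return sha256_of_file(path) in known_hashes

# Usage: known_hashes would be loaded from a vetted hash list.
# if is_known_csam(upload_path, known_hashes):
#     handle_flagged_content(...)
```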

[1] Laws against virtual CSAM have not gone uncontested. In Ashcroft v. Free Speech Coalition (2002), the Supreme Court struck down language in the Child Pornography Prevention Act of 1996 that outlawed material that “appears to be” or “conveys the impression of” a child engaging in sexual conduct. Congress responded by amending the law to its current standard: virtual CSAM that is “indistinguishable” from actual CSAM. With the advent and proliferation of AI-generated CSAM, the line between virtual and real CSAM is becoming increasingly blurred.

[2] Under 18 USC § 2258E(6), a “provider” within the scope of these CSAM reporting requirements is any electronic communication service or remote computing service. An “electronic communication service” means any service which provides to users thereof the ability to send or receive wire or electronic communications (18 USC § 2510(15)); a “remote computing service” means the provision to the public of computer storage or processing services by means of an electronic communications system (18 USC § 2711(2)).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
