[co-author: Stephanie Kozol]*
On October 2, New York Attorney General (AG) Letitia James announced that, in accordance with the “Stop Hiding Hate” Act, social media companies are now required to report their content moderation policies to her office, with first reports due no later than January 1, 2026. This legislation, sponsored by Senator Brad Hoylman-Sigal and Assemblymember Grace Lee and signed into law by Governor Kathy Hochul, mandates that platforms operating in New York with more than $100 million in gross annual revenue must post their content moderation policies publicly, provide consumers with a contact to report violations of the policy, and submit biannual reports to James’ office concerning compliance.
Key Requirements:
- Public Transparency: Companies are required to publish their terms of service in clear, accessible language and provide contact details for user inquiries.
- User Reporting Mechanisms: Platforms must clearly describe how users can report violations of the terms of service, and provide contact information for doing so.
- Action and Response Details: Companies must explain what actions they may take against posts that violate the policy, such as removing posts or deprioritizing their visibility.
- Biannual Reporting: Social media companies are required to submit reports twice a year to the New York AG’s office. These reports must include statements on their terms of service, including whether the policy defines terms such as hate speech, racism, extremism, and radicalization, among others. The company must also describe its policy and how it enforces it, including how it uses automated processes and handles user reports of conduct that violates the policy.
- Data Disclosure: Reports must include data on the total number of posts flagged as potential policy violations, the number of posts acted upon, and details of actions taken, including removal, demonetization, or deprioritization.
Enforcement and Penalties:
Failure to comply with the requirements, including failure to post the content moderation policy, failure to include contact information to report policy violations, and failure to submit the biannual reports, can result in civil penalties of up to $15,000 per violation per day. The AG is also empowered to seek injunctive relief. There is no private right of action under the act.
Carve-Out:
The act expressly excludes internet marketplaces and other applications or platforms in which “interactions between users are limited to direct messages, commercial transactions, consumer reviews of products, sellers, services, events, or places, or any combination thereof.” It also does not apply to social media companies with gross annual revenue of less than $100 million.
Why It Matters:
This act imposes additional compliance obligations on social media companies operating in New York. Companies must comply by publicly posting their content moderation policies and equipping their systems to capture data on policy application and user engagement, particularly concerning reports of potentially offensive conduct and how the company addresses them. Social media companies must demonstrate not only the existence of a content moderation policy but also adherence to it through objective factual reporting. Noncompliance could be deemed a violation of the act and an unfair or deceptive act. By mandating detailed reporting, New York is establishing a framework that may influence other states in tackling online hate and misinformation.
The act also serves as a reminder to all tech companies that the regulatory landscape is a dynamic patchwork of state-level legislation and regulation. It is essential for companies operating in the U.S. to develop robust compliance programs, work with experienced outside counsel, and train employees to understand their legal obligations in this increasingly regulated sector.
*Senior Government Relations Manager