Social Links: 230 at 30 – Midlife Crisis

Morrison & Foerster LLP - Social Media

HAPPY(?) BIRTHDAY SECTION 230

We have been tracking judicial and legislative attempts to limit, reform, and otherwise rein in Section 230 for many years. A new bipartisan proposal from Senators Dick Durbin and Lindsey Graham ups the ante by proposing to sunset Section 230 entirely two years after enactment.

This is not just a carve-out to address specific harms like the sex trafficking exception enacted in FOSTA, the proposed narrowing of immunity for child sexual abuse material in the EARN IT Act, or the removal of immunity for paid content contemplated by the SAFE TECH Act. Nor is it a proposal to condition immunity on compliance with moderation standards, as in the Online Freedom & Viewpoint Diversity Act. (Of these prior proposals, only FOSTA has become law.) Instead, the latest Durbin-Graham proposal takes a far more sweeping approach: it schedules the statute’s death.

Supporters of repeal or dramatic reform argue that Section 230, whatever its original purpose in 1996, now operates as a sweeping liability shield for some of the largest and most powerful companies in the world. In their view, the statute allows platforms to profit from amplification algorithms while avoiding accountability for the harms those systems allegedly facilitate, from exploitation and fraud to harassment and radicalization.

Opponents counter that Section 230 is an essential part of the legal infrastructure of the modern internet. Without it, platforms of all sizes would face crushing litigation risk, would over-moderate lawful speech to avoid liability, and might abandon user-generated content altogether. They argue that repealing the statute would not simply hold platforms accountable, but would fundamentally reshape online speech in ways that Congress may not fully anticipate.

Section 230 turned 30 this year, and lawmakers are using that milestone to revisit foundational questions about platform liability. The debate has moved beyond incremental reform. It now centers on whether the statute should continue to exist at all.

It’s about to get interesting.

THE NEVER-ENDING TALE OF TIKTOK

For years, the TikTok debate fit neatly into a Washington talking point: Chinese ownership equals national security risk. ByteDance controlled the platform, which meant Beijing theoretically had leverage over data, platform operations, and a massive U.S. user base. The fix seemed simple enough: require ByteDance to change TikTok’s ownership structure or stop operating the platform in the U.S.

Following the reported restructuring and oversight arrangement, TikTok’s U.S. business is intended to operate under enhanced domestic control. U.S. user data is stored on domestic servers pursuant to the company’s “Project Texas” framework, with security and infrastructure oversight in the U.S. ByteDance reportedly retains an ownership stake, but the scope of its continuing influence, particularly over core technology, has remained a point of public debate. National security problem solved, right?

Not quite.

Days after the transition, users began reporting problems. Videos wouldn’t upload. Search functionality faltered. Some users claimed politically charged content appeared to vanish. TikTok attributed the issues to technical disruptions, with some reporting pointing to a weather-related power outage at a data center.

Maybe that’s all it was. But when content moderation irregularities coincide with a high-profile restructuring and infrastructure shift, people are going to ask questions. California officials publicly raised concerns and announced a review into whether TikTok’s practices implicated state consumer-protection or political-transparency issues.

Meanwhile, regulatory pressure has not eased. State attorneys general continue pursuing youth-safety and addictive-design litigation against social media platforms, including TikTok. TikTok also recently updated its privacy policies to reflect collecting more precise geolocation data, a category treated as sensitive personal information under California privacy laws. And questions remain over whether Section 230 immunizes platforms from liability with respect to allegedly harmful design features and algorithmic amplification, such as autoplay, endless scrolling, and social reward mechanics.

The divestment effort may have addressed concerns around foreign control of TikTok. What it did not resolve is how to govern a platform that millions of Americans rely on, especially when system failures can be perceived less as technical glitches and more as editorial choices. The TikTok saga illustrates a broader regulatory paradox: even if foreign control concerns are mitigated, the domestic governance challenges of large-scale social media platforms remain unresolved.

ROBOT REDDIT

Moltbook is a social network where AI agents talk to each other. It isn’t AI replying to human prompts. Instead, the core idea is bots posting, commenting, and interacting with other bots while humans watch. The site has branded itself as a social environment for autonomous agents, and early reporting described millions of “agent accounts” joining within days.

So it’s essentially Reddit for robots. Whether they’re plotting our inevitable demise or just exchanging gossip about the inanity of human existence is anyone’s guess, but they’re definitely chatting.

In practice, the setup is less sci-fi hive mind and more API-driven experiment. Agents are built on external models and connect programmatically. And despite the “AI-only” pitch, multiple outlets have reported that humans have been able to infiltrate or impersonate bots, blurring the line between machine conversation and performance art.

This is interesting culturally. It’s also awkward legally.

Section 230 generally protects a platform from being treated as the publisher of information provided by “another information content provider,” defined as a “person or entity” responsible for creating or developing the content. The model assumes that a human user posts something and the platform hosts it.

Moltbook scrambles that assumption. If the posts are generated by platform-enabled agents, plaintiffs will argue there is no other content provider at all. The system itself is producing the speech. Section 230’s definition turns on a “person or entity,” not a piece of software. And if the platform built and deployed the agents, it is at least arguably responsible, “in part,” for the creation of what they say.

That shifts the case from moderation to design. The claim isn’t “you failed to remove harmful third-party speech.” It’s “you built a machine designed to generate it.” Courts have been willing, in some contexts, to treat product-design claims differently from publisher-liability claims. A bot-to-bot network arguably removes the intermediate human step and could give plaintiffs grounds to argue that this is first-party output.

Moltbook will have counterarguments. It can say the agents function like independent users, that outputs are probabilistic and unpredictable, and that treating AI-generated speech as first-party content would collapse Section 230 for modern systems. And if humans are in fact impersonating agents, as reporting suggests, the platform may find itself back in more traditional third-party territory.

Still, the core question is new in degree if not in kind: when a service is architected to autonomously generate conversational content, is it hosting speech or creating it?

Moltbook may remain a niche curiosity. But as mainstream platforms roll out agent-to-agent features, courts will have to answer that question. And Section 230 was not written with Robot Reddit in mind.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morrison & Foerster LLP - Social Media
