South Carolina Joins the Age-Appropriate Design Wars
South Carolina’s Age-Appropriate Design Code Act was signed on February 5, 2026. It took effect immediately. Four days later, NetChoice sued.
The law applies to online services “reasonably likely to be accessed by minors” and imposes a duty of “reasonable care” to prevent defined harms, including “compulsive usage” and “severe emotional distress.” Covered platforms must assess how their recommendation systems, interface design, and engagement mechanics affect younger users, and must adjust accordingly. The law has no grace period to build a compliance program. The obligations attached the moment the governor’s pen left the paper.
NetChoice’s complaint frames this as more than a consumer protection law in a trench coat. When a state tells platforms to redesign features or reroute how content moves through a feed in order to prevent broadly defined psychological harms, the argument goes, it is regulating how expression is delivered. That, in NetChoice’s view, raises a First Amendment problem. The suit also asserts due process concerns, pointing to the law’s immediate effectiveness, and advances federal preemption arguments rooted in Section 230 of the Communications Decency Act and the Children’s Online Privacy Protection Act (COPPA).
None of this is entirely new. Bipartisan anxiety about minors online has been driving a wave of state legislation for several years, and the constitutional arguments are becoming more familiar with each case. The central question remains whether a law like this regulates conduct or whether targeting recommendation systems and content delivery mechanisms inevitably burdens protected speech. Courts have been wrestling with that distinction in different procedural postures and different circuits, with different results. The Supreme Court’s recent decisions in Moody v. NetChoice and NetChoice v. Paxton sharpened that inquiry but left key questions about recommendation systems and platform design for lower courts to resolve.
What happens in Columbia will matter well beyond South Carolina. Other states are watching closely.
Germany Wants to Pull the Plug on Minors’ Social Media Access
Friedrich Merz has not been chancellor long, but he has already signaled a willingness to confront the internet directly. Germany’s new government is reportedly weighing a nationwide ban on social media use by minors: not another design code or risk assessment framework, but a straight prohibition.
That’s a pretty meaningful escalation.
Germany is not starting from zero. The EU’s Digital Services Act (DSA) already imposes significant obligations on large platforms, including systemic risk assessments, heightened protections for minors, and transparency requirements. The DSA is built around the tried-and-true mitigation strategy of identifying the harm, reducing the exposure, and documenting the effort. A categorical age-based ban rejects that approach in favor of stark simplicity. The question then becomes not how to make the product safer for younger users, but whether younger users get access at all.
Within the EU, any ban would need to withstand scrutiny under EU law, including proportionality principles, even as member states retain authority over youth protection. Outside Europe, the geopolitical framing is unavoidable. The platforms most exposed are American companies or enterprises closely tied to U.S. capital markets. Merz is expected to engage with the Trump administration on trade issues. Adding sweeping restrictions on American technology companies to that agenda would not be a simple move, whatever its domestic rationale.
The United States has not enacted a federal equivalent to the DSA. The First Amendment and statutory protections like Section 230 make categorical access restrictions difficult to sustain, which is why American efforts have tended toward design mandates and parental consent frameworks. Germany operates under a different constitutional structure.
If enacted, this would rank among the most aggressive youth-access restrictions in any major Western democracy. Australia has already enacted a comparable ban for users under 16. The broader signal is hard to miss. In some jurisdictions, incremental compliance is losing political credibility.
If mitigation fails to quiet the politics, prohibition becomes easier to sell.
Lights, Camera, Misconception
Actor Joseph Gordon-Levitt went to Washington in February to discuss AI, creativity, and online platforms. He is thoughtful and well intentioned. He is a fine actor. But his synopsis of Section 230, though delivered with inexplicable confidence, was mostly incorrect. Fame may open doors on Capitol Hill, but it doesn’t amend statutory text.
According to reporting from Techdirt, Gordon-Levitt described the statute as a sweeping immunity shield that allows platforms to evade responsibility for harmful content. That framing is familiar. It also only loosely tracks the statute.
As any practitioner working in this space would know, Section 230 does not immunize platforms for their own conduct. It does not bar federal criminal prosecutions. It does not bar intellectual property claims. It does not override federal privacy statutes. What it does is prevent platforms from being treated as the publisher or speaker of third-party content. That distinction is foundational. Remove it, and moderation decisions become potential lawsuits. Content removals become discovery fights.
While it does not lack opinions about Section 230, Washington sometimes lacks careful analysis. Apparently, celebrity testimony tends to amplify the former.
The timing makes the rhetoric more consequential. As your favorite blog recently reported, Senators Dick Durbin and Lindsey Graham have proposed sunsetting Section 230 entirely. In that environment, reducing the statute to “Big Tech immunity” is not just imprecise. It carries political force and, some might argue, amplifies an existential threat to platforms used by millions daily.
Age-gating fights paint platforms as reckless designers. Verification rollouts trigger privacy panic. And Section 230 keeps getting recast as the villain in every online controversy, as if it were a corporate favor instead of a basic rule about who’s legally responsible for speech.
Meanwhile, courts are still sorting out where platform conduct ends and third-party content begins. But the good news is Joseph Gordon-Levitt released a sort of mea culpa by way of a twelve-minute(!) YouTube video entitled “Explaining My Section 230 Speech.”
One Last Thing . . .
Section 230 is not the only federal safe harbor under pressure. In a recent decision, McGucken v. Shutterstock, a court suggested that services engaging in pre-publication content review may face questions about eligibility for the Digital Millennium Copyright Act’s (DMCA) Section 512(c) protection. The ruling raises a structural question: how much proactive moderation can a platform undertake before it risks altering its safe-harbor posture?
We’ll take a closer look at that decision and what it could mean for pre-publication screening strategies in a forthcoming deep dive.