Teenage Kicks – The Socially Aware Year in Review

Morrison & Foerster LLP - Social Media

If there’s one thing that became crystal clear in 2025, it’s that the internet, despite being more than thirty years old, still acts like a teenager much of the time.

Not a charming one, not a misunderstood prodigy. A teenager who has grown three inches since last year, argues, sulks, insists they are an adult now, and keeps discovering new ways to break things. The internet is impulsive, socially powerful, technically brilliant, and unconvinced that rules written by adults in the 20th century should control it in the 21st.

Much of the technology law debate we covered this year came down to one question: how much responsibility should the grown-ups bear for keeping their unruly digital teen in check?

For years, the answer was relatively straightforward. Platforms were treated as speakers for some purposes, but as intermediaries for many others, and largely insulated from liability for user conduct by statutes from a different era. Section 230 sits at the center of that arrangement. So does a patchwork of laws that assume content moves predictably, influence is diffuse, and design choices are largely neutral.

That framework isn’t under pressure because lawmakers suddenly decided to pick a fight with the internet. It’s under pressure because the internet is now less like a library and more like a parallel social structure: one that recommends, nudges, ranks, reinforces, and occasionally veers into dangerous territory. Algorithmic systems began producing effects that legacy legal categories were never designed to address, leaving courts and regulators scrambling to catch up.

Courts and legislatures are now grappling with that reality. Regulating speech directly raises familiar constitutional problems. Punishing outcomes without understanding systems produces perverse incentives. And once platforms began shaping user experience at scale, the legal question shifted from what was said to how it was delivered.

That shift explains why so much of 2025’s legal activity focused less on individual posts and more on product features. Recommendation engines. Engagement mechanics. Default settings. Safety controls. Plaintiffs reframed claims away from speech and toward design. Legislatures experimented with disclosures, warnings, age-based access rules, and duty-of-care language that sounds more like consumer protection than media regulation.

Very little of this was clean. Some of it betrayed lawmakers’ misunderstanding of online culture. But it reflected a real attempt to answer a hard question: when systemic design choices contribute to harm, where do you apply disciplinary pressure?

From Speech to Systems

The fights that mattered most in 2025 were not about any single post: who hosted it, who edited it, who promoted it, or who refused to take it down. They were about the systems that decide what people see next. Section 230 made the older, post-by-post framing workable by drawing a clear line between the speaker and the service that carried the message. That line still exists, but courts and litigants asked Section 230 to do far more work in 2025 than it was designed to handle.

Plaintiffs and lawmakers increasingly focused on whether harm flowed from the content itself or from the way platforms were engineered to deliver it. Algorithms, infinite scroll, frictionless sharing, and engagement-driven ranking were recast not as editorial decisions, but as intentional design choices with foreseeable consequences. That reframing sought to sidestep Section 230 rather than confront it head-on. Instead of treating platforms as publishers, these cases argued that platforms are more like product designers.

Some courts rejected the distinction outright, while others allowed claims to survive early motions long enough to test whether liability could be established. Defendants continued to invoke Section 230 successfully, particularly where claims required courts to evaluate the substance of user content or second-guess editorial decisions. At the same time, the statute no longer provides the kind of categorical safe harbor that defendants enjoyed in an earlier era. When claims were framed around defective design or inadequate safety controls, many courts were willing to listen.

It’s unsurprising that age-gating emerged as such a pressure point. It offered a way to address online harm without directly regulating speech, and a way to impose responsibility without declaring that platforms are publishers, all while chanting “we must think of the children!”

Who’s Looking After the Kids?

In practice, protecting minors online turns out to be far easier to agree on than to do.

In 2025, states leaned heavily into age-based regulation. Age-gating sounds modest. Reasonable. Surely it is not too much to ask that adult material stay away from minors, or that platforms know when they are dealing with children. The difficulty is that the internet does not recognize age the way the law does.

Verifying age online requires collecting potentially sensitive information from users who may not trust the entity collecting it. The more robust the verification, the greater the privacy and security risks. The lighter the touch, the easier it is to evade. Every approach creates tradeoffs that become more pronounced at scale, and the courts noticed. The year brought repeated constitutional challenges to age-verification and warning laws, with judges pausing enforcement while they evaluated First Amendment concerns, vagueness problems, and the burden imposed on lawful speech.

This is where product liability lawsuits quietly reshaped the debate. The questions shifted: Did the platform assess risk? Did it implement reasonable safeguards? Did it account for how its design choices affected younger users?

That framing tracks the line courts have drawn between publisher liability and product design. A law that dictates what platforms may publish or moderate looks like speech regulation. A law that expects reasonable steps to prevent foreseeable harm looks more like product safety. Courts have been cautious about that distinction.

What 2025 made clear is that age-based regulation is not a side issue. It is a proving ground. Legislatures are testing how far they can go without triggering strict scrutiny, plaintiffs are testing whether failure-to-protect theories can survive early dismissal, and platforms are testing the limits of compliance.

Taking It Slow

Faced with rapidly evolving technologies and aggressive new statutes, courts spent much of 2025 doing what they do best: slowing things down.

Courts are not there to redesign the internet. They move incrementally, resolving the disputes before them under existing statutes and procedural constraints. What courts pointedly did not do in 2025 was declare Section 230 either obsolete or untouchable, despite a number of lawmakers calling for its end. Instead, judges focused on framing. Is the claim really about third-party speech, or about how a system operates? Does liability turn on evaluating content, or on design choices that function independently of any particular post? Would the requested relief change what a platform says, or how it is built?

Section 230 wasn’t the only place courts pumped the brakes in 2025. Age-gating took center stage in a host of legal actions. When states imposed aggressive warning or age-verification rules, courts often paused enforcement. Not as a rebuke to legislative intent, but as a recognition that moving too quickly risked constitutional harm that could not easily be undone.

Growing Up Takes Time

None of this produced clean resolutions in 2025, and that may be the most honest takeaway of all. Maturation is incremental, uneven, and it rarely produces clean endings. There’s no tech law equivalent of a Unified Field Theory that will offer an elegant, unassailable set of rules to govern everything.

This is what it looks like when a legal system confronts a technology that has outgrown its original assumptions. Not collapse. Not revolution. A series of adjustments, pauses, and course corrections that gradually redefine expectations.

Growing up takes time. The law does too.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morrison & Foerster LLP - Social Media
