Social Links: Robots, Rewinds, and a Rocky Mountain High

Morrison & Foerster LLP - Social Media

AI Legislation Makes for Unlikely Bedfellows

A new bipartisan bill introduced in the U.S. Senate would bar companies from offering “companion” AI chatbots to minors, with criminal penalties for systems that engage children in sexual or self-harm discussions. Dubbed the GUARD Act and co-sponsored by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), the proposed legislation arrives after a year of state-level experiments aimed at curbing what lawmakers call “emotional manipulation by machines.”

The bill would require age verification, parental consent systems, and clear disclosures that users are speaking to an artificial intelligence. The Senate may be poised to draw the first federal boundary around a fast-moving sector that has so far relied on internal policing and compliance with state-level legislation.

For the past two years, states have been out front on this issue. California lawmakers approved measures targeting AI chatbots that simulate human relationships with children, while New York and Utah floated similar bills focused on age gating and data restrictions. Even where those proposals stalled or were vetoed, they established a template for treating conversational AI tools aimed at minors as high-risk products. The new federal bill takes that model national.

If enacted, it would force AI developers to navigate overlapping state and federal frameworks, each with its own definitions of “child,” “consent,” and “companion.” It could also shift the liability conversation from content moderation to platform design.

The GUARD Act would mark the first federal attempt to define how “human” an AI system may act when a child is on the other end of the chat, a definition that could ripple through product design, risk models, and the next generation of online-safety law.

BONUS ROUND!

We asked three popular AI chatbots to summarize their “thoughts” on the legislation. Let’s hear from our robot overlords:

GEMINI – “Focusing solely on companion chatbots leaves critical issues like bias, misinformation, and national security completely unaddressed, proving that US AI regulation remains frustratingly incremental and reactive. Overall, it’s a necessary first step in a small, high-risk area, but far from a comprehensive AI framework.”

GROK – “This kills American AI edge. China laughs, builds unrestricted bots, dominates. xAI? We’ll fight this crap—truth over safetyism. Kids deserve uncensored tools to grow brains, not bubble-wrap. Parents: Lock your damn browsers. Senators: Stay in your lane.”

CHATGPT – “Legislating technology out of anxiety rarely produces durable or balanced policy. Protecting minors is essential, but outright bans or criminal penalties for design flaws could freeze innovation and punish responsible developers along with bad actors. The smarter move would be to require transparency, safety testing, and opt-in parental controls—rules that scale with risk rather than fear.”

(At least Gemini had the decency to respond without including an em dash.)

Don’t Mess with Texts

Texas is tightening the rules on marketing messages. In early September, Senate Bill 140 expanded the state’s Telephone Solicitation Act to cover text-based promotions. The statute, once aimed at telemarketers with headsets and call lists, now applies to anyone sending commercial SMS or MMS messages to Texas residents.

The change is simple on paper but sweeping in effect. Any text that tries to sell, promote, or induce a purchase can qualify as “telephone solicitation.” Those who send them must register with the Secretary of State, post a $10,000 bond, and renew each year. That’s a heavy lift for independent marketers and small businesses that rely on texting to reach customers.

Texas acted while the federal picture remained blurry. The Telephone Consumer Protection Act still governs robocalls and autodialers, but years of litigation have left its reach uncertain. Florida and Oklahoma have already filled that gap with their own statutes. Texas followed suit, pulling text messages directly into its law and tying enforcement to the state’s Deceptive Trade Practices Act, giving both consumers and regulators new levers for action.

For national brands, the result looks less like telemarketing compliance and more like privacy compliance. A single campaign can trigger overlapping consent, timing, and registration rules across several states. Businesses relying on third-party SMS vendors will need to confirm not only that messages are compliant, but also that someone in the chain is properly registered and bonded. Plaintiffs’ lawyers are watching too: violations can invite both regulatory action and class litigation.

A law built for cold calls has evolved into a framework for digital communication, with text messages treated much like personal data. Businesses that treat SMS marketing as casual outreach may find themselves sitting squarely in the realm of regulated communications.

Be Kind, Rewind

A 1988 statute meant to protect video-rental records is once again dictating the rules of online data tracking. On October 22, in Cole v. LinkedIn Corp., a federal judge allowed a proposed class action against LinkedIn to move forward, rejecting the company’s attempt to dismiss claims under the Video Privacy Protection Act (VPPA). Originally passed after a Washington, D.C., newspaper published Judge Robert Bork’s video-rental history, the VPPA now finds itself policing cookies and pixels instead of VHS tapes.

The plaintiffs allege that LinkedIn tracked users’ viewing history on its platform and shared that information without consent. LinkedIn countered that any viewing data was buried in what it referred to as a “voluminous morass” of digital information that no ordinary person could reasonably decipher. The court wasn’t persuaded. Under the VPPA, the question isn’t how difficult the data is to extract, but whether it can identify a person and reveal what they watched. The ruling follows a growing line of cases that treat digital engagement data as the modern equivalent of a Blockbuster receipt.

The decision highlights a broader issue. Courts are increasingly using legacy statutes to fill the gap left by Congress’s failure to pass a comprehensive, modern privacy framework. What started as protection for video renters now governs cookies, tracking pixels, and analytics tags. It’s part of a trend that’s seen plaintiffs’ lawyers revive decades-old laws—from the Wiretap Act to state consumer protection statutes—to regulate modern data flows. The compliance obligations for tech platforms are evolving not through new laws, but through reinterpretations of old ones.

Adapting analog laws to digital behavior is messy. With its built-in private right of action and statutory damages of up to $2,500 per violation, the VPPA has become a reliable vehicle for privacy litigators. Similar suits have targeted streaming services and media outlets that embed tracking tools capable of mapping users’ viewing patterns. The result is a steady migration of the VPPA from the 1980s video store to the 21st-century browser tab. The only thing missing is a smug teenage clerk judging your taste in film.

California: The Test Kitchen for Tech Regulation

California isn’t just home to companies at the vanguard of tech; it’s increasingly the state writing the rules for how that technology should behave. In October, Governor Gavin Newsom signed two bills that underscore the state’s growing appetite for shaping digital risk: one targeting artificial intelligence chatbots and another mandating mental health warning labels on social media platforms.

The first measure, often called the AI Chatbot Disclosure Act, requires conversational AI systems to tell users they’re talking to a machine whenever “a reasonable person could believe otherwise.” It also pushes developers to build safeguards for users who express thoughts of self-harm or suicide. The law takes effect in 2026 and will likely force both enterprise and consumer-facing AI products to revisit their user-interface design and moderation workflows, though platforms already aligning with EU transparency rules may have a head start on the disclosure piece.

The second measure, AB 56, requires social media platforms to display mental health warning labels for minors upon login and then hourly after three hours of use. The goal is to remind young users that endless scrolling has documented mental health consequences. Whether these warnings meaningfully change behavior remains to be seen, but the statute’s enforcement mechanisms could create a new compliance category for “psychological risk disclosures.”
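For platforms sketching out what compliance might look like, the statute’s cadence reduces to a small piece of session accounting: one warning at first open each day, then hourly warnings once cumulative daily use passes three hours. The TypeScript sketch below illustrates that timing rule only; the session shape and function names are hypothetical stand-ins of ours, not anything the bill prescribes.

const HOUR_MS = 60 * 60 * 1000;
const DAILY_THRESHOLD_MS = 3 * HOUR_MS; // AB 56's three-hour mark

// Hypothetical per-day usage record for a minor's account.
interface MinorUsage {
  firstOpenWarned: boolean;      // shown the login warning today?
  cumulativeUseMs: number;       // total active use so far today
  lastHourlyWarningAtMs: number; // usage mark of the last hourly warning
}

// Returns true when the statute's schedule calls for another warning.
function warningDue(u: MinorUsage): boolean {
  if (!u.firstOpenWarned) return true; // first open of the day
  if (u.cumulativeUseMs < DAILY_THRESHOLD_MS) return false;
  // Past three hours of use: warn once per additional hour.
  return u.cumulativeUseMs - u.lastHourlyWarningAtMs >= HOUR_MS;
}

// Record that a warning was displayed at the current usage mark.
function recordWarning(u: MinorUsage): void {
  u.firstOpenWarned = true;
  u.lastHourlyWarningAtMs = u.cumulativeUseMs;
}

With lastHourlyWarningAtMs starting at zero, the first hourly warning fires at the three-hour mark and repeats each hour after; how “use” is actually measured (foreground time, active scrolling, or open sessions) is a question the statute will force platforms to answer and the sketch deliberately leaves open.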

Together, these laws reveal how California’s regulatory strategy is evolving. Instead of banning technologies outright, lawmakers are focusing on disclosure and user awareness as the primary levers of compliance. That approach mirrors earlier state experiments in data privacy transparency and algorithmic accountability, while sidestepping the constitutional issues that tripped up attempts at broader speech-based legislation.

The legislation may make California a de facto laboratory for digital risk policy in the United States. Companies deploying AI features broadly will soon need to decide whether to build separate California-compliant versions or adopt the state’s standards everywhere. Either path increases the likelihood that California norms will shape national practice.

It’s a small irony that a state known for inventing the attention economy now wants to regulate its side effects. But in the absence of federal action, Sacramento’s experiment may become the country’s template for how technology and law reconcile.

NetChoice Takes on the Centennial State

Colorado has entered the national conversation on youth social media regulation through a new requirement for on-screen health warnings. Under the Healthier Social Media Use by Youth Act (HB 24-1136), signed in June 2024 and set to take effect January 1, 2026, platforms must display pop-up notifications to users under 18 who spend more than an hour on a platform in a 24-hour period or who use an app between 10 p.m. and 6 a.m. The pop-ups must warn about the mental and physical health effects of excessive social-media use. The requirement echoes elements of California’s Age-Appropriate Design Code, which similarly ties platform design obligations to youth well-being. Colorado’s Department of Education will also create a resource bank for schools and parents, underscoring the state’s view that youth online behavior is a public-health issue as much as a privacy concern.
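The statute’s two triggers are concrete enough to express directly. Here is a minimal TypeScript sketch of the conditions as described above; the function and its inputs are hypothetical illustrations for reasoning about the rule, not a compliance recipe, and details like how usage is measured or local time is determined are assumptions.

const HOUR_MS = 60 * 60 * 1000;

// Hypothetical check of HB 24-1136's pop-up triggers for a given user:
// under 18, and either more than one hour of use in a trailing 24-hour
// window or any use between 10 p.m. and 6 a.m. local time.
function popupRequired(
  ageYears: number,
  usageInLast24hMs: number,
  localHour: number, // 0-23, in the user's local time zone
): boolean {
  if (ageYears >= 18) return false;
  const overDailyHour = usageInLast24hMs > HOUR_MS;
  const nightUse = localHour >= 22 || localHour < 6;
  return overDailyHour || nightUse;
}

// A 16-year-old at 11 p.m. triggers the pop-up regardless of total use;
// so does 90 minutes of daytime use within 24 hours.
console.log(popupRequired(16, 30 * 60 * 1000, 23)); // true (night use)
console.log(popupRequired(16, 90 * 60 * 1000, 14)); // true (over one hour)
console.log(popupRequired(16, 30 * 60 * 1000, 14)); // false

Even this toy version surfaces the objections NetChoice raises below: evaluating either condition presupposes that the platform knows the user’s age and tracks time on the service.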

In August 2025, NetChoice sued Colorado Attorney General Phil Weiser, arguing that the law violates the First Amendment by compelling private actors to broadcast a state-approved message. The complaint in NetChoice v. Weiser claims that the required pop-ups amount to compelled speech and points to the Supreme Court’s decision in 303 Creative LLC v. Elenis, which held that the government cannot force private parties to convey its message. NetChoice also challenges the law as unworkable in practice, arguing that it requires platforms to verify users’ ages, monitor activity, and generate time-triggered warnings in ways that raise both privacy and technical concerns.

A preliminary-injunction hearing was held on October 16, 2025, and the court’s decision is now pending. The outcome will determine whether the state can begin enforcing the law ahead of its January 2026 start date.

Colorado’s model sits within a broader wave of state-level regulation. Utah and Arkansas have enacted parental-consent and age-verification laws, both paused by federal injunctions. California’s Age-Appropriate Design Code Act was blocked last year on First Amendment grounds, while Florida and Texas are still defending laws that limit how platforms moderate content. Each reflects the same tension, with states treating social media use as a youth health issue and platforms insisting their design choices are a form of expression.

The Colorado case may help define how far states can go in forcing platforms to alter their digital environments for policy reasons. If the court allows the law to stand, warning-label mandates could spread quickly. If it strikes the statute down, it will reinforce that even well-intentioned design rules may have constitutional limits.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morrison & Foerster LLP - Social Media
