Social Links: Columbus to Sacramento: The Compliance Tour

Morrison & Foerster LLP - Social Media

A Gigantic Loophole in Ohio’s Age-Verification Law

Ohio’s new age-verification law for online adult content, which requires online platforms to limit certain users’ access to sensitive material, took effect on September 30, 2025. The statute, RC § 1349.10, asks covered platforms to do a lot. They must confirm that every user is at least 18, re-confirm that verification every two years, and use geofencing tools that monitor a visitor’s location in real time.
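
As a rough illustration of the compliance model, the statute’s three obligations translate into an access gate like the sketch below. This is a minimal sketch, not legal guidance: the statute prescribes no implementation, so the Verification record, the 730-day interval, and the Ohio-only geofence test are all assumptions about how a platform might operationalize the text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# "Every two years" assumed here as 730 days; the statute does not
# specify how the interval should be measured.
REVERIFY_INTERVAL = timedelta(days=730)

@dataclass
class Verification:
    verified_age: int      # age established at verification time
    verified_at: datetime  # when verification last occurred

def may_access(verification: Verification | None,
               visitor_state: str, now: datetime) -> bool:
    """Gate sensitive material on the statute's three checks (sketch)."""
    if visitor_state != "OH":
        return True   # geofence: apply the rule only to Ohio visitors
    if verification is None or verification.verified_age < 18:
        return False  # no confirmation the user is at least 18
    if now - verification.verified_at > REVERIFY_INTERVAL:
        return False  # biennial re-confirmation has lapsed
    return True
```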

That’s the rule on paper. The harder part is figuring out who actually has to follow it. The legislature carved out a long list of exemptions that includes media employees, mobile carriers, cloud providers, and, most notably, “interactive computer services,” as defined in Section 230 of the Communications Decency Act. Because that federal definition covers most sites that host user-generated content, the exemption reaches many of the platforms most people would assume are the statute’s targets. Several major adult-content providers have already cited the carveout as a reason they do not intend to comply.

The result is a law with an ambitious compliance model and a very narrow set of entities that seem obligated to implement it. It is also unfolding against a national backdrop in which Section 230 has stayed intact, even as plaintiffs continue to test its limits. The Supreme Court recently declined to review a Ninth Circuit decision shielding Grindr from liability after a user was assaulted by men he met on the platform. Both courts agreed the claims arose from user-generated content, not platform conduct, and that framing continues to guide many Section 230 disputes. Your favorite blog covered this decision back in October.

Ohio lawmakers may try to refine the statute, but state leaders have already suggested that the meaning of the exemption will likely be sorted out in court. For now, the law sits in an unusual place. It imposes intensive age-verification requirements and dynamic location monitoring, yet its broad carveout leaves open the question of what real-world results, if any, the state can expect.

California to AI: “Show Your Work”

California has been inching tech policy forward for years, usually with steady, incremental moves. This fall, the state stopped inching and started moving with purpose. Recently, Governor Gavin Newsom signed two bills aimed squarely at artificial intelligence, marking the state’s clearest attempt yet to shape how AI systems behave before they reach the public.

The first measure, SB-53, requires companies building certain AI “frontier” models to do something regulators have been hinting at for a while: show their work. Developers of these models must document how their systems perform, assess foreseeable risks, and put guardrails in place to prevent misuse. The statute doesn’t dictate frontier model architecture but does require evidence of control over harmful or deceptive outputs and a published safety framework describing those controls.
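
SB-53 doesn’t prescribe a format for that published safety framework, but the disclosure could plausibly take a machine-readable shape along these lines. The sketch is purely illustrative: every field name and value below is a placeholder, not statutory language.

```python
# Illustrative only: SB-53 requires a published safety framework but
# does not dictate a schema. All fields and values are placeholders.
safety_framework = {
    "model": "example-frontier-model",  # hypothetical model name
    "risk_assessments": [
        {
            "risk": "generation of deceptive content",
            "method": "internal red-team evaluation",
            "mitigation": "refusal training plus output filtering",
        },
    ],
    "misuse_guardrails": ["rate limits", "abuse monitoring"],
    "framework_url": "https://example.com/safety",  # placeholder URL
}
```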

The second law, SB-243, addresses “companion chatbots,” or AI systems that look or sound like human beings and are designed to be capable of meeting a human user’s social needs. When a reasonable user could mistake such a bot for a person, platforms must disclose that fact clearly. The law also sets higher expectations for services used by minors or people at risk of self-harm, pushing companies toward more deliberate interface cues such as labels on customer-service bots, visual markers in chat products, or design features that make it clear when software is simulating empathy.
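
In practice, the disclosure duty points toward fairly simple interface logic: surface the notice whenever confusion is plausible, and surface it more often for vulnerable users. Here is a minimal sketch; the wrap_reply helper and the cadences are hypothetical, since the statute sets the duty, not the mechanics.

```python
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a person."

def wrap_reply(reply: str, user_is_minor: bool,
               turns_since_notice: int) -> str:
    """Prepend a disclosure cue to a companion-chatbot reply (sketch).

    The notice repeats periodically, and more often for minors; the
    cadences below are hypothetical, not statutory requirements.
    """
    interval = 10 if user_is_minor else 50
    if turns_since_notice == 0 or turns_since_notice >= interval:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```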

These laws represent a shift in emphasis. California is no longer reacting primarily to bad outputs; it’s targeting the design decisions that produce them. And because the state’s consumer market is just too large for companies to ignore, many developers will likely treat these requirements as national standards in the name of marketplace efficiency.

California is hardly alone in attempting to tackle the tricky world of AI. Federal agencies are sketching out their own approaches to model accountability, and states from Colorado to Utah are experimenting with age-verification rules, impersonation restrictions, and AI safety legislation of their own, producing a growing patchwork. California’s new framework won’t settle every debate about transparency, liability, or enforcement, but it does steer the conversation toward a more familiar regulatory shape.

History suggests California doesn’t usually stand alone in matters of tech policy for long, and AI policy shows no sign of becoming the exception.

Speaking of AI Law . . .

Synthetic media has moved from novelty to infrastructure. As 2025 draws to a close, deepfake image and voice models can replicate a person with startling precision, and tools that once required a full-blown research lab now sit inside mobile apps. The law has not evolved nearly as quickly. Even as federal agencies warn about deepfake-enabled fraud and extortion, prosecutors confronting a harmful deepfake often reach for statutes written for another era and discover the conduct does not cleanly fit traditional theories of fraud, identity theft, or harassment.

The gap is most visible when a deepfake causes reputational or emotional harm but no financial loss. Without a clear statutory hook, prosecutors stitch together charges meant for forged passports or stolen Social Security numbers. Courts have been cautious about stretching those laws to synthetic likenesses.

In response, lawmakers have started legislating. The TAKE IT DOWN Act, signed into law in May 2025, criminalizes the nonconsensual publication of intimate imagery, including AI-generated depictions. Other bills introduced in 2024 and 2025 would go further: some target election interference, and a few attempt to define “digital impersonation” broadly enough to cover future use cases while still leaving room for satire, parody, or expressive work.

Drafting that balance is difficult. Deepfake bills often turn on the speaker’s intent, the realism of the output, and the likelihood of real-world harm. Each of those elements shifts with technology and context. A flexible statute can age well but puts more burden on investigators and courts. A narrow one risks obsolescence as soon as the next generation of tools arrives.

For companies deploying generative tools, this landscape favors conservative product design and clear disclosure practices. Many platforms are moving toward watermarking, provenance metadata, and labeling to reduce the risk of confusing synthetic content with authentic material. Those choices may not eliminate liability, but they help demonstrate good-faith efforts as Congress works toward a statute that fits the problem.
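
The engineering side of that can be modest. The sketch below attaches a small provenance record to a generated asset; the field names are loosely inspired by C2PA-style manifests but are illustrative, not the actual C2PA schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model_name: str) -> dict:
    """Build a minimal provenance record for a generated asset (sketch)."""
    return {
        "generator": model_name,                        # which model produced it
        "created": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,                              # explicit AI-generated label
        "sha256": hashlib.sha256(content).hexdigest(),  # ties record to the bytes
    }

# Usage: emit the record alongside the asset, e.g., as a JSON sidecar file.
print(json.dumps(provenance_record(b"<image bytes>", "example-image-model"),
                 indent=2))
```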

Whether lawmakers can strike the right balance remains an open question. Technology evolves at light speed. The law moves at congressional speed.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morrison & Foerster LLP - Social Media
