Part 1 - Section 230: 27 Years Old And Still In The Spotlight

Morrison & Foerster LLP - Social Media

Here at Socially Aware we talk a lot about Section 230, the section of the 1996 Communications Decency Act (CDA) that immunizes social media platforms and other online service providers from liability stemming from content created by their users or other third parties. Sometimes referred to as “the 26 words that created the Internet” and often described as one of the most important pieces of Internet law, Section 230(c)(1) states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Put another way, when online platforms publish content posted by their users, the platforms are exempt from liability arising from that content if they meet the requirements for the Section 230 safe harbor. For example, if a TikTok user posts a defamatory video about another person, that user may be liable for defamation, but Section 230 will generally provide immunity for TikTok.

Section 230 has generated significant controversy in recent years and is once again making headlines, especially with the intense focus on generative AI and its potential consequences – both positive and negative – for individuals, businesses, governments, and society at large.

This summer, we’re launching a four-part series that will look at the origins of Section 230, examine some of the seminal cases that have interpreted and applied the statute, and explore the numerous legislative and judicial efforts to limit the scope of the law.

Section 230: Origins

Imagine if TikTok or YouTube were potentially liable for every item of content posted by their users. Social media, and indeed the Internet as we currently know it, could hardly exist under those conditions. But that is where things stood in the early days of the Internet, based on some early case law that we discuss below. Before getting into the cases, however, it will be helpful to understand how defamation liability for third-party content typically works. Generally speaking, the threshold question is whether the party disseminating the third-party content can exercise editorial discretion in doing so.

First, consider a common carrier such as a telephone company or FedEx. When these companies are acting as common carriers, they are generally not liable for the third-party content that they disseminate. After all, a common carrier must carry what its customers give it; it has no discretion to refuse to disseminate the content.

By contrast, take a publisher like The New York Times. It does exercise editorial discretion over what it disseminates: it reviews each piece of content submitted to it and decides whether or not to publish it. And, as a result, it bears liability for that decision. If a newspaper or other publisher repeats or otherwise republishes defamatory third-party content (e.g., in an op-ed piece or letter to the editor), the publisher will be subject to liability to the same extent as if it had created the content itself (subject to some nuances that we need not address for present purposes).

Somewhere in between we have distributors like bookstores or magazine stands. They exercise some editorial discretion in choosing what books and magazines to sell, so that threshold requirement is met. But, unlike a newspaper, bookstores and magazine stands do not review or read every line of every book and magazine they sell. And they certainly don’t have the ability to go line by line and remove content they disagree with. Therefore, when it comes to distributors (as opposed to publishers), the law generally imposes liability for third-party defamatory statements only if the distributor knew (or constructively knew) of the defamatory statement at issue.

Now consider online platforms that host third-party content. In some ways, they act, or can act, like newspapers: they can create walled gardens, hire moderators, review and take down user posts they don’t like, and deny access to those whose content they don’t want to publish in the first instance. In other ways, at least some platforms operate more like the telephone company or another common carrier: they can choose not to hire moderators or review content, and instead simply let users post whatever they want. And in still other ways, online platforms resemble bookstores: they distribute enormous amounts of third-party content and may, at least for certain kinds of content, exercise editorial discretion in doing so.

Given the various functions online platforms can and cannot realistically perform, in the early days of the World Wide Web it was unclear how to think about their liability for the third-party content they published (or distributed). Two early cases that faced this issue came to different conclusions, though both held that platforms could be liable: one analogized them more to bookstores, the other more to newspapers.

Pre-Section 230 Cases

In a 1991 case in the Southern District of New York, Cubby, Inc. v. CompuServe Inc., Cubby, Inc. and Robert Blanchard sued CompuServe for libel, business disparagement, and unfair competition. CompuServe hosted a forum carrying content from Rumorville USA, a daily newsletter that had published allegedly defamatory statements about a competing online newsletter developed by Blanchard and Cubby. The court held that online service providers were subject to standard defamation law for the content they hosted, but concluded that CompuServe was merely a distributor, rather than a publisher, of that content. As a distributor, CompuServe could be held liable for defamation only if it knew or had reason to know of the defamatory nature of the content. Because Rumorville USA uploaded its newsletter automatically and CompuServe made no effort to review the content on its forums, CompuServe could not be held liable.

This holding strongly disincentivized online platforms like CompuServe from reviewing or moderating their users’ content. Put differently, it incentivized platforms to operate more like a telephone company and less like a newspaper in their handling of users’ content.

Next, we have a 1995 New York Supreme Court case, Stratton Oakmont, Inc. v. Prodigy Services Co., which made things even more perplexing for online platforms. In this case, a user of Prodigy’s Money Talk bulletin board created a post claiming that Stratton Oakmont (a securities investment banking firm) and its president had committed criminal and fraudulent acts in connection with an IPO. In response, Stratton Oakmont sued Prodigy and the anonymous poster for defamation.

The court held that Prodigy was liable as the publisher of the content created by its users, and thus was subject to the same liability standard as a traditional newspaper. Specifically, the court found that Prodigy exercised editorial control over the messages on its bulletin boards by posting content guidelines for users, using moderators to enforce the guidelines, and using screening software designed to remove offensive language – in other words, standard content moderation.

In terms of incentives, Stratton Oakmont was even worse than Cubby. Together, these cases taught online platforms to avoid exerting any effort whatsoever to moderate users’ content or otherwise make their platforms more pleasant: a platform that moderated content risked publisher liability under Stratton Oakmont, while a platform that did nothing enjoyed the more forgiving distributor standard under Cubby. It was this perverse incentive that led directly to the enactment of Section 230, signed into law by President Bill Clinton in 1996.

Enactment of Section 230

As noted above, Section 230 immunizes online service providers from liability stemming from the publication, removal, and filtering of third-party content. Section 230(c)(1) is the broadest and most powerful provision of the CDA and would have protected CompuServe and Prodigy from any liability for their users’ defamatory content, even if the platforms had been aware of the content. CDA § 230(c)(1) states that a provider or user of an “interactive computer service” cannot be treated as the publisher or speaker of content provided by a separate “information content provider.”

But what exactly does this mean and how far does Section 230’s immunity extend? In our next installment, we will look at how courts have interpreted the statute, particularly considering the rise of social media, which did not even exist in its modern form when Section 230 was enacted.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP - Social Media | Attorney Advertising
