What is Section 230?


The Internet makes it easier than ever for people to connect with others around the world, share ideas and information, and have their voices heard, regardless of whether they are single individuals with limited resources or massive corporations with money and influence. This connectivity is facilitated by innumerable platforms and websites that host user-generated content, such as social media platforms (like Facebook, TikTok, and Instagram), websites that allow individuals to create their own blogs (like WordPress, Weebly, or Medium), and other applications or websites built around user contributions (like Google reviews, Wikipedia articles, or Change.org petitions).

However, it’s widely believed that these websites and the degree of free speech that they provide would likely not be possible had Congress not enacted Section 230 (47 U.S.C. § 230). Indeed, the enactment of Section 230 in 1996 is largely credited with creating the Internet as we know it, through facilitating innovation and promoting the open exchange of ideas.

You may wonder, how did a single federal law accomplish this?

The answer is simple: by limiting tech companies’ legal liability. Specifically, under Section 230, owners of websites and platforms – as well as moderators of those platforms – are immune from lawsuits arising from third-party publications, as well as from their decisions to moderate content as they see fit. For example, if someone is the victim of a false and defamatory statement published on Facebook, Section 230 means the victim can only sue the Facebook user who posted the statement, not Facebook. Likewise, Facebook cannot be sued because it decides to remove certain problematic posts under its terms of service but fails to remove other problematic posts. By ensuring that tech companies would not be “punished” through endless defamation litigation, Section 230 allowed the Internet to grow into what it is today.

Today’s Landscape

However, Section 230 is not without controversy. Today, many question whether tech giants, like Google, Meta, and X (f.k.a. Twitter), should enjoy near-total immunity for what their users post and for how content is moderated, particularly given the societal power these companies now hold. Politicians on both sides of the aisle have argued that “Big Tech” has abused the protections Section 230 affords and that the law, at minimum, needs to be updated to meet the needs of the Internet today. Some have endorsed the extreme step of repealing the law entirely and, potentially, creating a regulatory scheme to address content moderation. Others say that changing Section 230 spells disaster for the Internet as we know it: if companies hosting massive volumes of posts suddenly face liability for every allegedly false statement, many services may over-police content, diminishing free speech, or may abandon policing it altogether, allowing harmful content like hate speech or terrorist threats to proliferate and cause harm.

It is anticipated that the debates and controversies surrounding Section 230 will continue well into the future. Given the Supreme Court’s decisions this past term in Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, it appears highly unlikely that the Court will endeavor to limit the protections afforded by Section 230 on its own. So, unless or until Congress can devise and pass a suitable alternative or amendment to the law, Section 230 will remain the law of the land and will continue to impact matters related to Internet speech for millions of people every day.

In this article, the Defamation Practice Group at Buckingham explores the history, purpose, legal context, controversies, and practical application of Section 230. Understanding Section 230 is particularly important in our area of practice, and it is helpful for those suffering from online defamation, harassment, or other harmful misconduct, as the law very much shapes how a victim can go about remedying the harm they have suffered.


The History of Section 230

Section 230 was enacted in response to the conflicting results in two cases that involved host liability for user-generated content: Cubby, Inc. v. CompuServe, Inc. (S.D.N.Y. 1991) and Stratton Oakmont, Inc. v. Prodigy Servs. Co. (N.Y. Sup. Ct. 1995). These cases were both decided when the Internet was still, relatively speaking, in its infancy and user-generated content was starting to become more popular through the availability of message boards and forums.

In the first case, Cubby, the defendant CompuServe was sued for libel because it hosted a forum where a user published an allegedly defamatory statement about the plaintiff. However, CompuServe did not moderate any of the content published by its users – it elected to allow anyone to publish whatever they wished and not remove forum posts based on their content. The court ultimately found CompuServe could not be liable because it was not moderating or reviewing any user-generated content. According to the court, because CompuServe had no knowledge of the allegedly defamatory statements published by the user, it could not be legally responsible for the publication.

However, several years later in Stratton Oakmont, the court found that Prodigy, the host of an Internet forum similar to CompuServe’s, could be liable for the allegedly defamatory speech of one of its users because it elected to moderate content published on its forums. The “moderation” at issue consisted of posting content guidelines, enforcing those guidelines through forum “leaders” (moderators), and using screening software intended to remove offensive language. The court likened Prodigy’s actions to the “editorial control” exercised by traditional print publications and thereby distinguished the case from the Cubby decision several years prior. In other words, because Prodigy decided to moderate some content posted by users, it could be held liable for any content posted. Had Prodigy abandoned moderation, it could have escaped liability, just as CompuServe had.

Several legislators in Congress found the ruling against Prodigy problematic, as it would discourage moderation completely. Thus, they devised Section 230, enacted as part of the Communications Decency Act of 1996, which facilitated the development of free speech on the Internet while also allowing websites and platforms to create their own standards for moderating or policing content.

While portions of the Communications Decency Act, specifically relating to the prohibition of obscene or indecent content, were struck down on First Amendment grounds, Section 230 was maintained, allowing companies to create innovative services and platforms for user-generated content without fearing the imposition of liability for their users’ speech or the decision to moderate it.


Understanding Section 230

As referenced above, Section 230 has two important components, both found in subsection (c), which states:

  (1) Treatment of publisher or speaker
  • No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
  (2) Civil liability
  • No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230(c)(1) provides Internet platforms (or, legally speaking, providers of an “interactive computer service”) with immunity from most liability that would arise from content that other users publish to the platform. If I publish a libelous statement about you in a Facebook post, you can sue me, but you cannot sue Facebook. Likewise, if I publish a defamatory review about you on Google, you can sue me, but not Google.

Section 230(c)(2) provides immunity from liability for decisions related to content moderation. If TikTok’s terms of service prohibit the publication of “misinformation,” and I publish a post on TikTok containing “misinformation” about you or your company, TikTok cannot be sued if it refuses to take down my post, regardless of whether the post violates the terms of service.

Controversies and Debates

As referenced, there are significant controversies and debates surrounding Section 230.

Some question whether the broad limitations on liability afforded by Section 230 remain appropriate today, particularly because the impact of Big Tech – and especially social media – on everyday life could not have been contemplated when the law was passed. Notably, when Section 230 was enacted in 1996, about 40 million people used the Internet worldwide. By 2019, more than 4 billion people were online, with 3.5 billion of them using social media platforms. Likewise, in 1996, there were fewer than 300,000 websites; by 2017, there were more than 1.7 billion.

As Professors Michael D. Smith and Marshall Van Alstyne observed in a 2021 Harvard Business Review article, “It’s Time to Update Section 230,” since the law was passed in 1996 we have seen the truly powerful impact of social media on society and learned “just how much social devastation these [Internet] platforms can cause” when used for improper purposes. Smith and Van Alstyne note that when Section 230 was drafted, no one anticipated questions like: “To what degree should Facebook be held accountable for the Capitol riots, much of the planning for which occurred on its platform? To what degree should Twitter be held accountable for enabling terrorist recruiting? How much responsibility should Backpage and Pornhub bear for facilitating the sexual exploitation of children? What about other social media platforms that have profited from the illicit sale of pharmaceuticals, assault weapons, and endangered wildlife?” Given the cost of the social harms arising on these platforms, some argue that change is necessary to enforce accountability for obvious harms while still providing sufficient protection to avoid chilling speech.

Various reforms have been floated, from requiring platforms to meet a duty-of-care standard in order to maintain their safe harbor from liability (as Smith and Van Alstyne propose) to limiting the protections to non-monetized content only. The prevailing sentiment is that if Section 230 does not fit the needs of our modern world, the appropriate action would be to amend or rewrite it, not repeal it.

Many warn that if the law is repealed or made toothless, websites and platforms may become overly cautious, taking the extreme step of removing entire categories of content or moderating content to a degree that the sites and platforms lose functionality. An alternative, but equally problematic, outcome would be platforms deciding to avoid moderating content entirely. Without moderation, sites could be overrun by harmful content, like hate speech, much of which would still not expose platforms to liability because it is protected by the First Amendment.

Section 230 in Practice

Discussions regarding Section 230, like those above, focus on the broad impact of the law and on balancing competing societal interests. They may offer examples of how Section 230 generally operates, but they rarely touch on common, practical questions about how and why Section 230 affects someone presently facing harm from content published on a platform that hosts user-generated content.

As legal practitioners who focus on the harms arising from the publication of content online, we are routinely faced with questions that implicate Section 230.

For example, “Someone wrote a false statement about me on a website or platform. Can I sue the website or platform?”

Generally, no.

This is precisely the type of lawsuit Section 230(c)(1) was designed to foreclose. If a third party publishes the content, the website or platform cannot be sued. The immunity extends not only to false and defamatory content but also to certain invasions of privacy and to harassing content giving rise to emotional distress. These protections cover content published by individual users on platforms such as:

  • Review sites, such as Google, Yelp, RateMDs, Indeed.com, Avvo, or RealSelf;
  • Social media platforms, such as Facebook, Instagram, TikTok, and X/Twitter;
  • Blog hosts/sites, such as Blogger, WordPress, or Medium;
  • Fundraising and petition sites, such as Change.org; and
  • Websites featuring pornographic content, such as Pornhub.

In addition to the broad immunity large sites receive, individual moderators also cannot be sued for failing to remove content, pursuant to Section 230(c)(2). This includes not only people who work for platforms like Facebook, but also, for example, the moderator of a community Facebook group.

Of course, you can always file suit against the person who published the content, but trying to go after the platform or an individual moderator will result in the dismissal of that defendant.

What if the harmful statement was published anonymously? Can I sue the platform then?

No. Even if the poster is anonymous, you cannot sue the platform for the harmful content published by that anonymous user. While it is possible to file a specific type of action – an action for “discovery” – against a website or platform, this does not get you to your ultimate goal: holding the poster accountable. An action for discovery only allows you to secure information from the platforms to help you find the right person to sue.

However, if you don’t know who published the content, the better bet is filing a “John Doe” lawsuit to unmask the anonymous poster. Once the perpetrator is identified, you substitute them into the case in place of the “John Doe.” In these cases, you will interact with platforms to secure the information necessary to unmask the Doe, but the platforms are not defendants and the process is not adversarial.

Notably, even if you are unable to identify the John Doe, you may still be able to get the content removed. While social media websites typically won’t act on a user report that content is false and defamatory and, therefore, should be removed, they will typically honor a court order establishing that the content is false and defamatory as a matter of law.

What if I don’t want to file a lawsuit? Is there any other way I can have the content removed?

There are almost always options for trying to obtain the removal of content from a website or platform outside of filing a lawsuit. Often, the best first step is to report the harmful content through the website’s or platform’s own reporting tools. It is Section 230 that enables sites to offer these reporting mechanisms and to make their own decisions about whether to remove certain content. While a site does not have to remove content based upon a terms of service violation, sometimes it will, and the problem will be resolved fairly quickly and easily.

You should also consider whether there are other available options to address your problem, such as reporting a copyright violation if, for example, someone is using a selfie you posted to harass you, or is using a photo you took of your business to impersonate your business account on a platform.

If content is published on, for example, a community Facebook page, you may want to reach out directly to the moderator with a request for the content to be removed. If you are polite and explain the circumstances appropriately, the moderator may voluntarily act to help you.

It is important to emphasize that, because of Section 230, sites and moderators are not required to take down content published by third parties – except in very narrow circumstances – even if the content violates the site’s own terms of service. We often see clients grow frustrated when a content report or removal request is not honored by a social media site or other site hosting user-generated content, and sometimes that frustration leads to less effective communications with the platform’s team responding to the report or request, such as issuing demands the site need not comply with. Remember, it is entirely up to the platform to grant or deny your request, so you want them to want to help you. Be professional, pleasant, and patient when communicating with platforms to put yourself in a better position.

Conclusion

Section 230 is an incredibly important law that has shaped the Internet and, therefore, our modern world immensely. As the Supreme Court recognized, the Internet is one of the “principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.”

It is anticipated that debates and controversies regarding Section 230 will continue for years to come. Even if the law is amended or reframed in the near future, as technology evolves, the law will likely need to be revisited repeatedly to keep up with the ever-changing social media landscape.

Understanding the impact of this law is helpful to anyone who engages with online platforms and websites that include user-generated content. If a problem arises with someone else’s posting, it is helpful to know how – under the current framework – you can seek redress and what avenues are not available.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Buckingham, Doolittle & Burroughs, LLC | Attorney Advertising
