Assaults on Section 230 of the Communications Decency Act (the “CDA”)—which shields online platforms from civil liability for third-party content on their services—are abundant these days. On October 15, 2020, FCC Chairman Ajit Pai announced that his agency, at the request of President Trump, will draft rules explaining when platforms’ efforts to moderate user-posted content will leave them exposed to potential liability. Two days earlier, Justice Thomas issued a scathing critique of the Court’s current interpretation of Section 230, arguing for a much more limited reading that would drastically narrow the liability shield.
Most of the discussion has focused on concerns relating to free speech, the spread of misinformation, and accusations of biases in moderation practices. However, the case in which Justice Thomas issued his statement demonstrates another important issue at stake—the ability of platforms to use privacy and information security screening tools.
Section 230(c)(2) contains two distinct protections. Subsection (c)(2)(A) protects good-faith decisions to remove “objectionable” content, while Subsection (c)(2)(B) protects software providers who give internet users the technical means to screen or filter such content. It is the latter provision that was at issue in Enigma Software Group USA, LLC v. Malwarebytes, Inc., which involved two companies that both provide software enabling individuals to filter unwanted or malicious content, such as malware. Enigma sued Malwarebytes, alleging that Malwarebytes engaged in anticompetitive conduct by configuring its product to make it difficult for consumers to download and use Enigma’s products. In its defense, Malwarebytes invoked Section 230(c)(2)(B).
The Ninth Circuit had previously held in Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169 (9th Cir. 2009), that providers of software filtering tools (like Enigma and Malwarebytes) were in fact protected by Section 230(c)(2) because those tools allowed users to block objectionable content, such as malware. The Zango court did not, however, address whether there were limitations on a provider’s discretion to declare online content objectionable.
The Ninth Circuit rejected Malwarebytes’ Section 230 defense, finding that “filtering decisions that are driven by anticompetitive animus are not entitled to immunity under section 230(c)(2).” 946 F.3d 1040, 1047 (9th Cir. 2019). The Ninth Circuit explained that, in passing the CDA, Congress wanted to encourage the development of filtration technologies, not to enable software developers to drive each other out of business. Accordingly, the Ninth Circuit found that this filtering conduct was not protected. The Supreme Court denied Malwarebytes’ petition for certiorari, and it was in connection with that denial that Justice Thomas wrote his statement advocating a narrower scope for Section 230.
The Ninth Circuit’s opinion and the Supreme Court’s denial of certiorari mark the first chip in the immunity armor for makers of anti-malware software and other filtering tools. Indeed, various cybersecurity experts, technology think tanks, and law and computer science professors submitted amicus curiae briefs in connection with the certiorari petition arguing that leaving the Ninth Circuit’s opinion intact would open the door to litigation against producers of malware screening tools—and not just for allegedly anticompetitive behavior.
The Ninth Circuit’s decision, now left intact by the Supreme Court, could have a chilling effect on innovation in malware detection and filtration systems. Makers of these filtering and screening tools may now have to spend resources assessing the litigation risks of developing software that identifies and quarantines threats. To minimize those risks and costs, these companies may take a more conservative approach, declining to flag threats whose makers might plausibly claim to be rivals. An approach that errs against classifying potential rival software as a threat is particularly problematic because malware already often actively disguises itself as legitimate software.
The data security implications could be significant. Malware detection and filtration systems must constantly keep up with the evolution of malware itself. These tools can alert users to certain potentially unwanted programs, which slow down the overall performance of the user’s computer and ultimately create additional access points for hackers. Likewise, malware detection and filtration systems are vital to businesses, which use these tools to protect company and customer data from hacker attacks that utilize malware—for example, ransomware. The privacy implications could also be significant, as many individuals use filtration tools to help screen unwanted spam or content, the opening of which can lead to online tracking, placement of cookies, or other additional unwanted content.
While the recent assaults on the CDA’s liability shield focus largely on the First Amendment implications of actions by social media giants like Facebook and Twitter to filter and remove user content, an unintended consequence of these assaults could be an overall decrease in privacy and data security protections for us all.