Section 230 For Publishers: A New Paradigm?

By Krista L. Baughman

Rising government demands for censorship by social media platforms raise the question of who should be protected by Section 230: do the immunities only benefit social media giants like X/Twitter and Meta/Facebook? Or could they extend further, to new platforms created by other entities, such as nonprofit publishers like Environmental Progress?

This article is a thought experiment. Could a publisher use third-party web content management system (CMS) services and/or a third-party hosting service (e.g., SquareSpace or WordPress) to operate a platform and still enjoy statutory protection? What legal requirements would a New Platform have to meet to qualify for CDA 230 immunity, based on the current state of the statutory and common law?

A New Publishing Paradigm?

1. § 230 of the Communications Decency Act (47 U.S.C. § 230)

Section 230(c) of the Communications Decency Act provides “[p]rotection for ‘Good Samaritan’ blocking and screening of offensive material” as stated below:

(1) Treatment of publisher or speaker 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of--

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or 

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).  

CDA 230 provides the following definitions of its terms:

“interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions. 

“information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.

“access software provider” means a provider of software (including client or server software), or enabling tools that do any one or more of the following:

(A) filter, screen, allow, or disallow content;

(B) pick, choose, analyze, or digest content; or

(C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.

2. CDA 230(c)(1)

A New Platform is likely to have immunity under § 230(c)(1)

There is an abundance of case law interpreting § 230(c)(1) to render a “provider of an interactive computer service” immune from liability for any content posted by its users on its platform. See, e.g., Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093, 1097 (9th Cir. 2019). However, this analysis has overwhelmingly been applied to large social media defendants (Facebook, Twitter, YouTube, Yelp, etc.). The question posed is whether a smaller platform, including one that uses a web content management system and/or a hosting service (like WordPress) to run its forum, would also qualify as a “provider of an interactive computer service.” Our research strongly suggests that a New Platform would be considered a “provider of an interactive computer service” sufficient to enjoy § 230 immunity.

First, it is well-established that “§ 230 immunity [is] quite robust,” and courts have routinely “adopt[ed] a relatively expansive definition of interactive computer service.” Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1123 (9th Cir. 2003). And the definition of “interactive computer service” set forth in § 230(f)(2) provides that the term includes “any information service…,” which supports the conclusion that the immunity was meant to extend to any and all services that “provide[] or enable[] computer access by multiple users to a computer server.” CDA 230(f)(2); see also Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003) (noting that § 230 “is concerned with providing special immunity for individuals who would otherwise be publishers or speakers, because of Congress’s concern with assuring a free market in ideas and information on the Internet.”).

 Second, the few cases that do address the issue of § 230 immunity for smaller platforms support our conclusion. See, e.g., Global Royalties, Ltd. v. Xcentric Ventures, LLC, 544 F.Supp.2d 929 (D. Ariz. 2008) (finding that www.ripoffreport.com, a website with an estimated monthly traffic of 13,542 visits compared to, for example, www.facebook.com, with an estimated 16.8 billion monthly visits, qualified as an interactive computer service under CDA § 230(c) and was therefore entitled to immunity); Whitney Information Network, Inc. v. Xcentric Ventures, LLC, No. 2:04-cv-47-FtM-34SPC, 2008 WL 450095 (M.D. Fla. Feb. 15, 2008) (reaffirming the categorization of www.ripoffreport.com as an interactive computer service); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119 (9th Cir. 2003) (finding that Matchmaker.com, a website with only 12,100 estimated monthly visits, qualified for immunity under CDA § 230(c)); see also Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003) (finding that the production of an electronic newsletter qualified as an interactive computer service). Our experience with litigating §230 claims further supports the conclusion that courts will be liberal in defining what constitutes an “interactive computer service,” as this term has even been applied (in an unpublished opinion) to a school’s operation of its student website.

We also considered, and rejected, the possibility that if the New Platform used a third-party content management service and/or third-party hosting service in connection with operating its platform (e.g., WordPress), a court would consider those third parties to be the “provider(s),” because they, not the New Platform, own the servers or host the content. Here too, the plain language of § 230(f)(2) seems to foreclose this conclusion because it only requires that an interactive service provider “provides or enables computer access by multiple users to a computer server,” without also requiring the provider to be the owner of the computer server. A New Platform that rents or pays for server access to host its content online would, pursuant to a common sense understanding of terms, be “a service that provides information to multiple users by giving them computer access…to a computer server, namely, the servers that host its social networking website.” Klayman v. Zuckerberg, 753 F.3d 1354, 1357 (D.C. Cir. 2014). This interpretation is further supported by 47 U.S.C. § 153(24) (the section providing definitions of terms used in the Title of which CDA 230 is a part), which defines “information service” broadly as “the offering of a capability for…making available information via telecommunications, and includes electronic publishing…”.[1]

In short, our research reflects that a New Platform that provides a forum on which users may publish information is highly likely to be deemed a provider of an interactive computer service, sufficient to avail itself of immunity under § 230(c)(1), even presuming that the New Platform uses the services of a third-party CMS and/or host.

 

§ 230(c)(1) immunity would render a New Platform immune from liability for content posted by users, and for the New Platform’s decisions to remove content and delete user profiles.

            § 230(c)(1) states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

 

            Cases nationwide have interpreted § 230(c)(1) to give platforms immunity not only for liability based on content posted by their users, but also for the platform’s decision to remove user-generated content or to delete user profiles. See, e.g., Federal Agency of News LLC v. Facebook, Inc., 432 F.Supp.3d 1107, 1116 (N.D. Cal. 2020), citing Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (en banc). Courts have reasoned that being a “publisher” as used in the statute “involves reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” Federal Agency, supra, citing Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009). Indeed, the U.S. Supreme Court has noted that “from the beginning, courts have held that § 230(c)(1) protects the ‘exercise of a publisher’s traditional editorial functions – such as deciding whether to publish, withdraw, postpone, or alter content.’” Malwarebytes, Inc. v. Enigma Software Group USA, LLC, 141 S.Ct. 13, *16 (2020), citing Zeran v. America Online, Inc., 129 F.3d 327, 330 (CA4 1997).

Thus, any “activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.” Federal Agency, supra at 1116, citing Roommates.com at 1170-71. Section 230 “immunizes decisions to delete user profiles.” Federal Agency, supra at 1116, citing Riggs v. MySpace, Inc., 444 F. App’x 986, 987 (9th Cir. 2011). Furthermore, § 230 immunity extends to causes of action under both state and federal law, though the Ninth Circuit has not interpreted § 230 to grant immunity for causes of action alleging constitutional violations. Federal Agency, supra at 1116, citing Roommates at 1169 n. 24.

Indeed, (c)(1) immunity has been interpreted so expansively by courts nationwide, that the U.S. Supreme Court has weighed in with disapproval, suggesting in a non-binding context that (c)(1) has swallowed the rule laid out by Congress in (c)(2). See Malwarebytes, supra at *16-17 (“[t]aken together, both provisions in § 230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, § 230(c)(1), and when they decide to exercise those editorial functions in good faith, § 230(c)(2)(A)…[b]ut by construing § 230(c)(1) to protect any decision to edit or remove content…courts have curtailed the limits Congress placed on decisions to remove content…”). Thus, while it seems possible that SCOTUS will overturn the plethora of cases that read (c)(1) expansively, at the present moment, § 230(c)(1) appears to be all that a platform needs to enjoy broad immunity under the statute.

            To summarize: if a New Platform is held to be a provider of an interactive computer service, then under CDA 230(c)(1) it will be free to remove user-posted content and delete user accounts (“deplatform”) without legal liability, and it cannot be sued for content posted by its users.

 

            There are a few important caveats to § 230(c)(1) immunity. First, the provider must have had no role in the “creation or development” of information posted by users. A person who helps “develop” unlawful content (Long v. Dorset, 369 F.Supp.3d 939, 948 (N.D. Cal. 2019)), or who “creates, authors, or otherwise materially contributes to publication such that the content is properly attributable to them” (Phan v. Pham, 182 Cal. App. 4th 323, 326 (2010)), cannot claim § 230(c)(1) immunity. As an often-cited example, in Roommates.com, a roommate-matching website operator was not entitled to immunity where the operator posted a questionnaire that required disclosure of the subscriber’s sex, sexual orientation, and status with children, which the Ninth Circuit found to be the website’s own act rather than content developed solely by a user. Roommates, supra, at 1164.

            Further, § 230 does not immunize a defendant from constitutional claims, federal criminal claims, intellectual property claims, certain federal and state laws relating to sex trafficking, and certain privacy laws applicable to electronic communications.

 

3. CDA 230(c)(2)

Assuming, as we do above, that a New Platform will be deemed a “provider” of an “interactive computer service,” it would also qualify for immunity under § 230(c)(2), which immunizes “any action [by a provider of an interactive computer service] voluntarily taken in good faith to restrict access to or availability of material that the provider…considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”[2] The Ninth Circuit has explained the difference between (c)(1) and (c)(2) immunity as follows:

 

“Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties. Subsection (c)(2), for its part, provides an additional shield from liability, but only for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider ... considers to be obscene ... or otherwise objectionable.” § 230(c)(2)(A). Crucially, the persons who can take advantage of this liability shield are not merely those whom subsection (c)(1) already protects, but any provider of an interactive computer service. See § 230(c)(2). Thus, even those who cannot take advantage of subsection (c)(1), perhaps because they developed, even in part, the content at issue, see Roommates, 521 F.3d at 1162–63, can take advantage of subsection (c)(2) if they act to restrict access to the content because they consider it obscene or otherwise objectionable. Additionally, subsection (c)(2) also protects internet service providers from liability not for publishing or speaking, but rather for actions taken to restrict access to obscene or otherwise objectionable content.” — Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1106 (9th Cir. 2009).

            In other words, § 230(c)(2) can offer a platform additional immunity beyond what (c)(1) provides, by shielding a platform from liability for censoring even content that it had a hand in creating, presuming the platform acted in good faith and the content meets the description set forth in the statute.

            As compared to cases discussing CDA 230(c)(1), there is relatively limited case law interpreting CDA 230(c)(2). The cases that do exist generally reflect judicial leniency in favor of platforms. The statute focuses on the provider’s subjective view of what is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” (CDA 230(c)(2)), and “does not require that the material actually be objectionable; rather, it affords protection for blocking material ‘that the provider or user considers to be’ objectionable.” Domen v. Vimeo, Inc., 433 F.Supp.3d 592, 603 (S.D.N.Y. 2020), citing Zango, Inc. v. Kaspersky Lab, Inc., 2007 WL 5189857, at *4 (W.D. Wash. Aug. 28, 2007), aff’d, 568 F.3d 1169 (9th Cir. 2009).

            In the Vimeo case, the court found Vimeo’s subjective intent to be apparent based on its Guidelines, which stated under a section entitled “How does Vimeo define hateful, harassing, defamatory, and discriminatory content?” that “[v]ideos that promote Sexual Orientation Change Efforts (SOCE)” are forbidden. Domen v. Vimeo, Inc., 433 F.Supp.3d 592, 603 (S.D.N.Y. 2020). Based on the complaint’s allegations, the court found that plaintiff’s videos in fact promoted SOCE, and therefore that Vimeo subjectively found them to be “harassing” under its own terms.

            In a similar vein, the courts that have addressed the “good faith” issue seem to err on the side of ruling in favor of platforms. In Vimeo, the court held that “conclusory allegations” of bad faith are insufficient and rejected plaintiffs’ § 230(c)(2) argument where “[b]ased upon the allegations of the FAC, what occurred here is that Vimeo applied its Guidelines to remove Plaintiffs’ videos, since such videos violated the Guidelines. Plaintiffs do not include sufficient factual allegations regarding Vimeo’s alleged bad faith to ‘nudge[ ] their claims across the line from conceivable to plausible.’” Domen v. Vimeo, Inc., 433 F.Supp.3d 592, 603 (S.D.N.Y. 2020) (quoting Bell Atlantic Corp. v. Twombly, 550 U.S. 544, 570 (2007)) (emphases added).

            Similarly, in Berenson v. Twitter, Inc., 2022 WL 1289049 (N.D. Cal. Apr. 29, 2022), the court found that Twitter did not act in bad faith in deplatforming a plaintiff who posted COVID-19 related content, because “Twitter constructed a robust five-strike COVID-misinformation policy and, even if it applied those strikes in error, that alone would not show bad faith. Rather, the allegations [of Twitter responding to plaintiff’s complaints about prior censorship] are consistent with Twitter’s good faith effort to respond to clearly objectionable content posted by users on its platform.” Id. at *2, citing Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009); Domen v. Vimeo, Inc., 433 F. Supp. 3d 592, 604 (S.D.N.Y. 2020).

            It is worth noting that in Berenson, although the Court dismissed plaintiff’s speech-related claims, it allowed claims for breach of contract and promissory estoppel to go forward where the plaintiff alleged that “Twitter, through its vice president…gave specific assurances to plaintiff that, among other things, it ‘would try to ensure you’re given a heads up before any [enforcement] action [under Twitter’s specific COVID-misinformation policy] is taken.’” Berenson, supra at *2-3. Accordingly, we should assume that if a New Platform’s managing agents engage in direct correspondence with users regarding what they can or cannot expect about enforcement of the platform’s guidelines or policies, such conduct can open New Platform up to potential liability for non-speech-related claims, regardless of § 230(c)(2) protection. (E.g., a statement to New Platform users that “nobody will ever get deplatformed for unpopular opinions here” could be problematic down the line when New Platform engages in content moderation.)

Finally, § 230(c)(2), like (c)(1), does not immunize a defendant from constitutional claims, federal criminal claims, intellectual property claims, certain federal and state laws relating to sex trafficking, and certain privacy laws applicable to electronic communications.

Conclusion

Our takeaway from these authorities is that so long as a New Platform maintains and posts clear guidelines or policies regarding what type of content is considered “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” and enforces those policies with a minimum of consistency, it can enjoy the protections of CDA 230(c)(2) when it censors content, including content that the New Platform itself had a hand in creating. But given the broad immunity of (c)(1) discussed above, it is likely that most courts will approach any CDA analysis under the rubric of (c)(1), regardless of how a plaintiff’s claims are pled.

 

End Notes 

[1] A final way we approached the interpretation of “interactive computer service” is more esoteric but could be used in a logic or policy argument. This approach relies on the final clause of CDA § 230(f)(2), which specifically includes in the definition “a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” At least one case we have found (albeit in a copyright context) suggests that where a third-party vendor provides both CMS and hosting services to a website, the third party is seen as a virtual “library.” See, e.g., Clean & Sober Media LLC v. Renew Counseling Center of NC, LLC, No. 5:20-CV-00252-M, 2021 WL 2056985 (E.D.N.C. May 21, 2021) (a WordPress server on which a website was hosted was referred to as a “library”). Accordingly, an analogy can be made whereby a New Platform would qualify as an interactive computer service under CDA § 230(f)(2), because the New Platform would be giving users access to a system (the website) operated by a library (WordPress). Concededly, when the statute was passed in 1996, the drafters were unlikely to have contemplated that the term “library” would include digital media, but conceptually the analogy may be persuasive to a judge.

 

[2] The second subparagraph of CDA § 230(c)(2) further immunizes action taken by a provider to enable or make available to users “the technical means to restrict access to material.” That is not our current understanding of how a New Platform would work, but this additional protection is potentially available.

 
