In the second of a five-part blog series drawn from Carrie Goldberg’s congressional testimony at a hearing entitled “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity,” we explore the courts’ dangerously broad interpretations of Section 230.

§230 now apparently protects companies who racially discriminate in removing content. Consider also a recent decision granting full immunity to a company for recommending content by terrorists.
–Supreme Court Justice Clarence Thomas on Section 230

Over the past 25 years, our courts have taken a rather narrowly written law, one intended only to prevent lawsuits against tech companies over publication torts like defamation, and metastasized it into a shield protecting the most powerful companies in the world from responsibility for terrorism, genocide, child sexual exploitation, illegal firearms dealing, and stalking.

Courts expanded the law well beyond claims of defamation and obscenity, throwing plaintiffs out of court even when they claimed their injuries were caused by negligence, fraud, breach of contract when companies violated their own terms of service, discrimination in advertisements, or a defective product.

Even the statutory damages provided by our federal child pornography law are off-limits to survivors, despite companies profiting from their nude images.

An example of this pollution of Section 230 is our case against Grindr. Herrick v. Grindr is a civil lawsuit born from the urgent need for immediate help in a life-or-death situation. While the goal of most Section 230 cases—and litigation in general—is financial compensation for past injuries, Matthew Herrick’s suffering was ongoing. Matthew was the victim of impersonation on the dating app Grindr by a vengeful ex-boyfriend, who used the app to arrange sex dates with over a thousand men at Matthew’s home and workplace, many of whom arrived under the impression that Matthew wanted to role-play rape fantasies. Matthew’s case was thrown out before we’d even gotten our foot in the door—the judge relied on Grindr’s “immunity” under Section 230.

Joining Herrick v. Grindr in the courts’ broadening interpretation of Section 230 are the following:

  • Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093 (9th Cir. 2019). The Court held that there was no material contribution when a website connected two users to each other based on the free-form comments they wrote on the site about their interest in heroin. A teenager died of fentanyl poisoning after the other user sold him fentanyl instead of heroin.
  • Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). The Court held that Facebook did not materially contribute to illegal content when its algorithm amplified terrorist content. Arranging and displaying third-party users’ content to others was not a material contribution.
  • Jane Doe No. 1 v. Backpage.com LLC, 817 F.3d 12 (1st Cir. 2016). There, sex-trafficking victims sued Backpage.com, a classifieds hub that (among other things) hosted online advertising for illegal commercial sex in the United States. Even though the plaintiffs had marshaled persuasive evidence that Backpage.com had adopted rules and practices that facilitated sex trafficking—from selectively removing postings discouraging sex trafficking and tailoring its rules to protect sex trafficking from detection to removing metadata on photographs—the First Circuit concluded that Backpage was entitled to Section 230 immunity. This decision prompted Congress to amend Section 230 in 2018.
  • Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1196 (N.D. Cal. 2009). There, the plaintiffs alleged that Google’s Keyword Tool suggested words to include in advertisers’ ads and often added words that resulted in false advertisements (such as turning the word “free” into “free ringtone” even though the advertised service would not be free). But the district court concluded that the Keyword Tool was a “neutral tool” that had immunity—even though Google had itself suggested the false phrases that advertisers had used in their ads.
  • Fields v. Twitter, Inc., 217 F. Supp. 3d 1116 (N.D. Cal. 2016). There, the plaintiffs argued that Twitter’s provision of accounts—which ISIS members then used to communicate with one another, recruit members, and spread propaganda—amounted to providing material support for terrorism. But the district court found Twitter categorically immune under Section 230. According to the court, giving ISIS members Twitter accounts was “publishing activity” for Section 230(c)(1) purposes because handing out accounts necessarily “include[s] decisions about what third-party content may be posted online.”