In August, Federal Communications Commission Chairman Ajit Pai asked the public for its thoughts on an executive order issued by President Trump.
The executive order targeted the legal shield that internet companies rely on to protect them from liability for user-created content: Section 230 of the Communications Decency Act.
The executive order (more info and full text here) focused on how platforms moderate content and demanded that the FTC, FCC, and DOJ take action against tech companies that censor far-right voices. Though we agree that the regulation of tech companies must change, as you might have gathered, our take is not quite the same. Common enemy (Big Tech), different tactic.
As Carrie explained on Twitter earlier this year, “There are two very distinct categories of criticism about section 230; conservatives claim social media companies that are biased against their political views over-censor far right content, while litigators like me say the people who are being horribly injured online deserve recourse.”
While we didn’t agree with the executive order, Carrie urged the FCC to see it as an opportunity to enact meaningful changes to Section 230 that would protect victims of online stalking, harassment, and abuse (more about these changes below).
Carrie’s comment to the FCC in response to Trump’s executive order said: “As an attorney, I have represented hundreds of victims of online crimes — stalking, nonconsensual porn (aka “revenge porn”), sextortion, and child sexual exploitation. In my cases, and others like them, large tech companies argue that Section 230 protects them from being held liable for harms that happen on their platforms. Companies take no initiative to stop active abuses because there are zero consequences if they don’t. They have zero incentive to identify or prevent harm. And in one of my cases, the court even pointed to Section 230 as a logical reason for a tech company to not take action to help a crime victim actively stalked and impersonated, because the immunity from liability meant no action was legally necessary.”
She explained that “Big Tech monopolies have abused Section 230 as a license to allow the revenge porn, cyberstalking, sex trafficking, dissemination of child sexual abuse material, and criminal harassment on their platforms, the exact type of content that Section 230 was meant to stop.”
“Section 230 must be reformed,” she said. However: “The FCC should focus on reforming Section 230 legislatively to protect victims of online crimes, rather than getting swept up in partisan debates over political bias.”
In order to restore Section 230 to its original purpose, Carrie argued, we must:
- Allow injunctive relief in emergency cases like Matthew’s, where the plaintiff is suffering imminent harm.
- Limit immunity to publication-related torts like obscenity and defamation. We can’t reasonably expect Facebook or YouTube to determine whether every post on its platform is defamatory. But whether someone is threatening violence, engaging in sex trafficking, or offering to sell illegal drugs is far easier to detect.
- Section 230 immunity should not apply when there’s constructive notice of the specific harm and damages. Courts have long held that distributors can be liable for content if “they knew or had reason to know of the existence of defamatory matter contained in the matter published.”[i] Nonetheless, courts have wrongly interpreted Section 230 to confer absolute immunity even when platforms have knowledge of the illegal and tortious content.[ii]
- Section 230 immunity should not apply to advertisements, or to state or federal criminal conduct the platform knowingly accommodates. Victims of nonconsensual pornography, sextortion, child sexual abuse, or cyberstalking should have every right to hold liable the platforms minting money (i.e., advertising revenue) off the user engagement their suffering generates. If misinformation, disinformation, or discrimination comes in the form of advertisements, those harmed should be able to sue. Social media companies earn money directly from this kind of speech and have a duty to vet it for accuracy and legality.
She concluded: “In sum, while I do not agree with the NTIA’s focus on political bias, the FCC should take this opportunity to enact more meaningful changes to Section 230 to protect victims of online stalking, harassment, and abuse.” (The NTIA, the National Telecommunications and Information Administration, filed the petition, prompted by the executive order, that the FCC was seeking comment on.)
Massive thanks to Mark Epstein, Esq. for his collaboration on this comment.
More than 20,000 comments were submitted.
On September 18th, the American Economic Liberties Project (AELP) singled out Carrie’s comment and submitted a reply in support.
AELP argued that policymakers should revise CDA Section 230 to make the legal framework underpinning the internet consistent with “longstanding American democratic principles of dispersed power, openness, and clear liability rules for product harm.”
Launched in February 2020, AELP is led by Sarah Miller, former Deputy Director of the Open Markets Institute and recognized as “one of the primary architects of the modern antitrust movement.” A nonpartisan nonprofit, AELP works to harness the current momentum of anti-monopoly scholarship and figure out the concrete steps needed to actually dismantle monopolies.
In its Reply supporting Carrie’s comment, AELP explains that “As the internet has evolved, monopolistic corporations like Amazon, Facebook, and Google have used the law to avoid responsibility for everything from selling dangerous products to promoting discrimination to spreading conspiracy theories to facilitating scams, stalking and harassment.”
AELP then recommends that policymakers define liability shields not by whether an online platform offers access to a web server, but by how it makes money.
It argues that “manipulation engines like Facebook use the law to avoid anti-discrimination rules, monopolists like Amazon use it as a shield to avoid liability for selling exploding batteries or hoverboards to customers, and reckless dating apps like Grindr enable stalking and harm by posing merely as a vessel for speech. These businesses act in this way because the law allows them to operate as ‘absentee owners,’ exploiting a legal framework that protects them from responsibility for how they make money.”
If a company or website generates revenue by selling behavioral advertising or travel services, collecting data, enabling commercial transactions, or otherwise monetizing the transmission of content, AELP argues, Section 230 should not necessarily apply. Instead, the company or site should be governed by the sector-specific rules that apply to its particular line of business, as well as by general tort law and state criminal law.
It goes on: “Two of the most powerful online platforms today, Facebook and Google, make the vast majority of their money from selling advertisements targeted using data gathered from a highly invasive set of surveillance architectures. Businesses pay Facebook and Google to serve ads to users. Facebook and Google encourage advertisers to continue using them and further encourage them with Facebook and Google’s ability to microtarget specific users based on the information the corporations have collected. Because most of their revenue comes from advertising, Facebook and Google also have an enormous motivation to keep users on their sites and services for as long as possible. As a result, the platforms tend to show users sensationalistic, outrageous, or otherwise addictive content so as to make their services more valuable to advertisers. The algorithmic choices and user interface decisions that curate articles, videos, or sales can lead users to divisive, inflammatory, abusive, hateful, or conspiratorial content and degrade users’ lives and communal discourse. ‘In a targeted advertising model,’ Jeff Gary and Ashkan Soltani have written, ‘misinformation and conspiracy theories are often the product, not an accident.’”
Why does this all matter? Because Section 230 prevents our clients who’ve been harmed through online platforms from getting justice in the courts. When companies don’t fear being sued, they don’t spend money keeping their products safe, and they don’t respond when users are injured. As lawyers, we need to be able to fight for justice for our clients, and that includes suing the hell out of tech companies.
Read the full comment, and the AELP’s recommendations for policymakers, here.
NOW READ:
- Trump issued the executive order in May 2020, in response to Twitter placing a warning on tweets spreading misinformation and glorifying violence. The rushed EO was a new challenge to the CDA230, a broadly misinterpreted law that protects tech giants from being held liable for what happens on their platforms. Read more on the blog here.
- Blog: WTF IS THE CDA230?
- Want to know more about what’s wrong, how it has very real consequences for victims, and what can be done to help? Carrie’s book, Nobody’s Victim, is a must-read.
Looking for advice? Our specialized team has never been more devoted to helping you! Contact us here or call 646-666-8908 to schedule a case evaluation.
Let’s stay connected!