Last week, the Supreme Court ruled unanimously against the notion that online platforms that incidentally host terrorist-made content have aided and abetted terroristic violence. In asserting the contrary, parties in Twitter v. Taamneh — and those in its companion case Gonzalez v. Google — sought to gut the immunities provided by 47 U.S. Code § 230, better known as Section 230. The court ruled for Twitter and returned Gonzalez to the U.S. 9th Circuit Court of Appeals for further consideration in light of the Taamneh ruling. The court’s rulings preserved the law’s soundness as well as online platforms’ ability to offer useful, open, and free forums for online speech.
From one angle, the Taamneh ruling seems like little more than the application of clear, existing legal precedent and common sense. Indeed, it avoids ruling on Section 230 itself, instead addressing the legal parameters of aiding and abetting. However, as Techdirt’s Mike Masnick observes, the court’s opinion “lays out all of the reasons why Section 230 exists: to avoid applying secondary liability to third parties who aren’t actively engaged in knowingly trying to help someone violate the law.”
One must pause to consider the precedent that platforms would have faced had the justices issued the inverse ruling. In that instance, almost any platform where terroristic content existed — even inadvertently — would likely have faced liability for any violence those terrorists committed, no matter how far removed.
Section 230 shields providers and users of interactive computer services — big and small alike — from civil liability incurred by content other users post. The law is simple: If a person or platform is not the speaker of illegal speech, then they are not liable for that speech. For example, under Section 230, the New York Times could face liability for content published under its name, but it has immunity for content readers post in its comments section.
By shielding providers from the undue costs of endless litigation, Section 230 promotes permissive yet responsible online content moderation. In its absence, platforms would, in effect, have two available moderation strategies to avoid lawsuits. One would be to allow absolutely everything, regardless of how potentially abusive or obscene. The other would be to aggressively moderate everything, severely restricting speech.
It is important to remember that Taamneh originated from tragedy. In 2017, Abdulkadir Masharipov, a man affiliated with the terrorist organization known as the Islamic State of Iraq and Syria (ISIS), murdered 39 innocents in an Istanbul nightclub. One victim’s family sued Twitter, Facebook, and YouTube’s parent company, Google, alleging that the platforms’ failure to prevent ISIS from posting content altogether amounted to aiding and abetting the terrorist group. Already, the plaintiffs’ case founders. They “never allege[d] that ISIS used defendants’ platforms to plan or coordinate the Reina attack,” notes Justice Clarence Thomas, author of the court’s opinion. “[I]n fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.”
Instead, the plaintiffs sought to pin on the platforms more general culpability for ISIS’s violence. Such a theory equates to holding a grocery store liable on the grounds that gangsters are known sometimes to shop there. Indeed, Thomas holds that to “aid and abet” requires the defendant “to take some ‘affirmative act’ ‘with the intent of facilitating the offense’s commission.’” Merely creating and operating a platform accessible to the public falls short of this threshold. “The fact that some bad actors took advantage of these platforms is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts,” he writes.
Foes of Section 230 increasingly paint various pictures of platforms’ core distributional functions — such as content-recommendation algorithms — as inherently promotional or editorial. Proponents of this theory frequently forget that platforms need not remain neutral toward user-generated content to enjoy Section 230’s protections. In Taamneh, in which the question appeared only incidentally, the plaintiffs alleged that such recommendations constituted active, substantial assistance to ISIS. Not so, the court held. “Viewed properly, defendants’ ‘recommendation’ algorithms are merely part of that infrastructure,” Thomas writes. “All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself.”
Taamneh and Gonzalez mark the vanguard skirmishes in what promises to be a contentious decade of internet-related litigation. Too many on both the right and left seek to corral the internet within the bounds of various narrow conceptions of online order.
Nonetheless, the internet’s initial decades, comparatively unencumbered by technocratic micro-meddling, have birthed a spontaneous order that enriches humanity greatly and makes mankind far freer. It has increased inestimably the individual’s capacity to access information, build community, do business, and communicate with far-flung fellows. Policymakers — and the judges who review their enactments — must not mar this progress with stifling and ill-considered legal regimes.
David B. McGarry is a policy analyst for the Taxpayers Protection Alliance.