In a quest to quell some of the fury over the content that shows up on its website, Facebook has erred too far in the other direction — pulling often innocuous posts and infringing on users’ free speech, critics argue.
As a recent Motherboard piece pointed out, Facebook has evolved from a college-focused website created in the mid-aughts, where the primary moderation concern was removing dick pics, to a behemoth with some 2 billion users who create a content soup with what some would consider vile ingredients.
After artificial intelligence software sorts through billions of daily posts created in more than a hundred countries, some 7,500 moderators are tasked with removing the content that has been flagged for review. Over the course of a week, these moderators must sort through about 10 million posts, removing those that contain nudity, violence, or hate speech. The sheer volume makes the job seem like an impossible task.
Facebook has taken it on the chin from both the left and the right: the right perceives it as having a liberal bias, while the left perceives it as having a conservative one. It has been accused of being the most popular dissemination platform for “fake news.” Given Facebook’s desire to maintain its user base, it’s understandable that the site would want to curate its own content so as not to tick off customers into leaving.
Removing child pornography is an easy move, since it’s illegal and patently obscene, but what about speech that some — but not all — might find offensive? Many on the left would argue that Facebook banning Alex Jones’ Infowars was a justified move due to the false information the far-right Jones peddled. Many others would argue that posts about white supremacy are offensive and shouldn’t be allowed on Facebook. (Interestingly, the site makes a distinction between white supremacy and white nationalism — allowing posts about the latter.)
Would you consider the Declaration of Independence hate speech? Facebook’s AI did. When a small Texas newspaper named the Liberty County Vindicator attempted to post sections of that sacred document on its Facebook page on the Fourth of July, the content was scrubbed by the algorithm due to a reference to “merciless Indian savages.”
Norwegian author Tom Egeland had his Facebook account suspended in 2016. His offense? He had posted one of the most gripping photographs in history: the Pulitzer Prize-winning “The Terror of War,” which shows children, including a naked, 9-year-old Kim Phuc, running from a napalm attack in Vietnam.
In both cases, the humans at Facebook reversed the earlier decisions and restored order. Still, these examples are cause for concern. As Sam Wolfson wrote about the incidents in the Guardian, “these errors in censorship might appear trivial, but as an ever-increasing amount of internet usage takes place within a tiny number of social media sites, it is likely these kinds of challenging works or honest reflections of history will reach fewer people.”
Sarah T. Roberts, a scholar of commercial content moderation at UCLA, told Motherboard that moderation is designed to help prevent public relations nightmares for the social media websites.
“The fundamental reason for content moderation — its root reason for existing — goes quite simply to the issue of brand protection and liability mitigation for the platform,” she said. “It is ultimately and fundamentally in the service of the platforms themselves.”
Larry Downes, project director for the Georgetown Center for Business and Public Policy, pointed out in the Washington Post that the First Amendment doesn’t provide a directive on the issue, since its free-speech provisions apply only to the government, not to private enterprises such as Facebook.
Still, Daphne Keller, director of the intermediary liability project at Stanford’s Center for Internet and Society, told Downes that users of such social media platforms as Facebook, Twitter, and Reddit are asking those sites to establish “a moral code.”
“But we’ll never agree on what should come down,” she noted. “Whatever the rules, they’ll fail.”
Downes said that such platforms are attempting to find an impossible “Goldilocks zone of just-right content moderation,” but notes that “picking and choosing among good and bad speech is a no-win proposition, no matter how good your intentions.”
His advice? Don’t try.
“Don’t moderate, don’t filter, don’t judge. Allow opinions informed and ignorant alike to circulate freely in what Supreme Court Justice William O. Douglas famously called ‘the marketplace of ideas.’ Trust that, sooner or later, truth will prevail over lies and good over evil.”
Notice to Readers: The American Spectator and Spectator World are marks used by independent publishing companies that are not affiliated in any way. If you are looking for The Spectator World please click on the following link: https://thespectator.com/world.