Content Moderation

A new chapter from A Field Guide to Social Media

Over the course of the next few months we’ll be treating this newsletter more like a Substack, sharing drafts from Chand and Ethan’s forthcoming book “A Field Guide to Social Media,” due out next year from MIT Press.

Do you have thoughts about this chapter? We’d love to hear your feedback in the comments.

Content moderation is at the heart of operating a social media platform—removing, promoting, demoting, and labeling content factors significantly into users’ experiences. It’s inevitably controversial, difficult to do accurately and safely, and a focal point for external pressure.

A young man in Manila, Philippines, explains his work as a content moderator for social media platforms in the documentary The Cleaners: “We see the pictures on the screen. You then go through the pictures and delete those that don’t meet the guidelines.” Every day, thousands of largely invisible content moderators around the world do the same, enforcing platforms’ rules on content ranging from child sexual abuse imagery to misinformation. Setting the rules and enforcing them is a difficult and important task, with all sorts of implications, including for free speech, elections, and users’ health.

For major platforms, content moderation is usually an industrial operation. The rules are set by hundreds of lawyers and public affairs executives and implemented by thousands of contractors and a handful of algorithms. Contractors must meet demanding targets: the young man from the Philippines had a daily quota of 25,000 pictures, and Facebook contractors are supposed to maintain an accuracy rate of 95% or higher.

For smaller platforms, content moderation is more of an artisanal operation. It often falls to an administrator, a team of volunteers, a handful of paid staff, or some combination of the three. Sometimes smaller platforms implement features to lessen the moderation load, like restricting when users can post or requiring content to be reviewed before it’s posted publicly. Smaller platforms are often easier to moderate because there’s less disagreement about the purpose of the space: they don’t have to be everything for everyone, a demand large platforms struggle to meet.

Reddit is an interesting exception: it’s a big platform that functions like a collection of small platforms. Reddit relies on subreddit moderators, who are mostly volunteers, to handle moderation, though it has a set of baseline rules that all subreddits must follow. Similarly, Nextdoor largely relies on neighborhood volunteers to moderate its network of thousands of local forums.

Regardless of the size or structure of a platform’s moderation operations, the frontline work of content moderators can be disturbing. Another young man in The Cleaners reports seeing “hundreds of beheadings.” Contractors for Facebook developed PTSD and other mental health disorders due to the constant exposure to disturbing content. Even volunteer moderators for r/aww, a subreddit devoted to cute pictures and videos of animals, have to wade through “gore porn.” As a result, it’s important that content moderators have access to mental health resources.

Additionally, algorithms, in particular those powered by recent advances in artificial intelligence, can reduce the amount of disturbing content human moderators encounter. However, they are currently mostly limited to large platforms with the resources and expertise to train and deploy them. Increasingly, startups and civil society organizations are attempting to fill this gap: a growing set of services now offers algorithmic moderation to platforms big and small. Regardless, algorithms are often paired with human review and make mistakes that have to be corrected—humans will always be in the loop at some level.
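As a rough illustration of how automated screening and human review fit together, here is a minimal triage sketch. The classifier, thresholds, and categories are illustrative assumptions, not any platform’s or vendor’s actual system.

```python
# Hypothetical human-in-the-loop triage: an automated classifier screens each
# post, acts on its own only at very high confidence, and routes borderline
# cases to a human moderator. The model, thresholds, and categories are
# illustrative assumptions, not any real platform's system.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Stand-in for a trained model: probability that the post violates policy."""
    return 0.0  # placeholder; a real system would call a classifier here


def triage(post: Post, remove_threshold: float = 0.98, review_threshold: float = 0.6) -> str:
    score = violation_score(post)
    if score >= remove_threshold:
        return "auto_remove"   # high-confidence violations handled automatically
    if score >= review_threshold:
        return "human_review"  # uncertain cases queued for a human moderator
    return "allow"             # low-risk posts published without review


print(triage(Post("1", "hello world")))  # -> "allow" with the placeholder model
```

The point of a design like this is that the algorithm only acts autonomously when it is very confident; everything uncertain lands in a human queue, which is where mistakes get caught and corrected.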

Moderation isn’t just difficult because it can be disturbing—it’s also hard to get right. Technology blogger Mike Masnick argues that “[c]ontent moderation at scale is impossible to do well.” He explains, “[I]t will always end up frustrating very large segments of the population and will always fail to accurately represent the ‘proper’ level of moderation [for] anyone.” In particular, platforms are constantly balancing two competing priorities: free speech and public health. Initially, many platforms advertise their commitment to free speech and their intention to keep moderation to a minimum. Over time, however, events and external pressure force them to expand their content moderation to comply with laws, improve users’ experiences, and satisfy advertisers. (Masnick calls this the “content moderation learning curve.”)

This balancing act leads to a crisis of legitimacy. Each time a platform expands moderation, it is criticized by groups who feel their free speech rights are being trampled. For example, U.S. Senator Ted Cruz berated then-Twitter CEO Jack Dorsey for his company’s moderation around the 2020 election: “Mr. Dorsey, who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear . . .?” At the same time, each time a platform falls short on moderation or pulls back, it is criticized by groups who feel the public health of the user base and society at large is being sacrificed. For example, President Biden in 2021 criticized platforms for allowing misinformation about Covid-19 to spread, accusing them of “killing people.”

The external pressure on platforms’ moderation practices that results from this crisis of legitimacy takes a number of forms, including jawboning, regulation, public outcry, and advertising boycotts. Jawboning is when government officials threaten to use their power to pressure someone into taking actions the government cannot compel directly. For example, after Biden’s statement about platforms “killing people,” his communications director threatened changes to platforms’ liability protections if they didn’t take action on misinformation. (Biden couldn’t take action directly because the First Amendment restricts the U.S. government’s ability to censor speech.) Two former members of Facebook’s policy team described jawboning as “incessant.” Governments also take direct action by regulating platforms’ moderation practices. Beyond government pressure, platforms face a constant stream of media stories, activism, and user criticism about their decisions. Sometimes the outcry reaches a level where advertisers feel uncomfortable associating their brands with a platform, leading them to pull advertising and demand changes. For example, major brands pulled advertising from YouTube in 2017, demanding changes to its moderation practices around extremism and hate speech; YouTube eventually rolled out new tools and stricter moderation as a result.

How can platforms overcome the crisis of legitimacy facing their content moderation practices? 

One way is to give external stakeholders more control over moderation decisions. This is sometimes called “community governance.” Mechanisms include advisory boards, democratic processes, and technical federation. Advisory boards are independent bodies with binding power over aspects of a platform’s content moderation practices. Meta’s Oversight Board is the most prominent example. It is made up of former political leaders, human rights activists, and legal experts from around the world who weigh in on Meta’s content moderation. Meta must implement the Oversight Board’s decisions, unless doing so would violate the law. Less technocratically, democratic processes aim to involve users directly in content moderation. For example, Twitter’s Community Notes initiative allows users to suggest notes adding context to any post. If a note is rated as helpful by enough people from different perspectives, it is displayed alongside the post. There are technological approaches to community governance as well. Technical federation aims to make social media platforms function more like email, with protocols facilitating choice and independence. Users can choose between different moderation approaches offered by a suite of third-party services.
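To make the Community Notes mechanism concrete, here is a deliberately simplified sketch of the “bridging” idea: a note is displayed only if raters from different viewpoint groups both found it helpful. The production system scores notes with a more elaborate statistical model over full rating histories; the two fixed clusters, thresholds, and function names below are illustrative assumptions.

```python
# Simplified "bridging" rule: a note surfaces only when raters from *both*
# viewpoint clusters rate it helpful. Clusters, thresholds, and names are
# illustrative assumptions, not Twitter's actual scoring algorithm.

from collections import defaultdict


def note_is_shown(ratings, rater_cluster, min_raters_per_cluster=3, min_helpful_share=0.7):
    """
    ratings: list of (rater_id, rated_helpful: bool) pairs for one note
    rater_cluster: dict mapping rater_id -> "A" or "B", viewpoint clusters
                   assumed to be derived from each rater's past behavior
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for rater_id, rated_helpful in ratings:
        cluster = rater_cluster[rater_id]
        total[cluster] += 1
        helpful[cluster] += int(rated_helpful)

    # Require enough raters *and* a high helpful share in both clusters, so a
    # note cannot surface on the strength of one side alone.
    return all(
        total[c] >= min_raters_per_cluster
        and helpful[c] / total[c] >= min_helpful_share
        for c in ("A", "B")
    )
```

Under this rule, a note rated helpful by five raters in cluster A but only one in cluster B would not be shown; broad agreement within a single viewpoint group is not enough.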

Another way of addressing the crisis of legitimacy is transparency. By providing more information about their moderation practices, platforms can combat misconceptions and enable researchers, policymakers, and the public to hold them accountable. Regulation may be necessary to close the gap between what platforms are willing to share and what external stakeholders want access to.

More broadly, shifting the understanding of content moderation from individual decisions to aggregate performance and process quality, such as whether platforms meet standard error rates and implement best practices, would help take the pressure off the mistakes and contested decisions that are bound to happen. (Even assuming 99.9% accuracy, a large platform handling hundreds of millions of posts a day will make hundreds of thousands of moderation mistakes every day.) Platforms should align their practices with this reality and offer users opportunities to challenge their decisions—procedural justice is a proven way to increase legitimacy.

It’s also worth considering whether the intense focus on platforms’ content moderation practices misses the forest for the trees. A growing number of experts, including former platform executives, argue that how platforms are designed matters more than how they are moderated. Platform design shapes what kinds of behavior and content are possible and encouraged, and is therefore upstream of the moderation decisions made on the frontline. Adjusting moderation can only accomplish so much, because it responds to conditions that design has already created.

Content moderation will always be controversial and difficult. However, platforms can take steps to address its inherent challenges by investing in community governance, transparency, resources for human moderators, and innovations that make moderation more efficient and accurate. External stakeholders would do well to recognize those challenges, particularly at scale, and to take a clear-eyed view of how much moderation matters, relative to other factors like design and business models, for the issues they care about.

See Alt-Tech, Business Models, Free Speech, Harassment, Misinformation, Regulation