Content moderation in Reddit

Reddit's moderation bot in action.

Content moderation in Reddit is unique in the social media industry. Reddit is governed using a model similar to a democracy, in which users share responsibility for the content posted and for deciding what is and is not acceptable. The first layer of rules is the set of rules specific to an individual community, known as a subreddit. Subreddits are run by volunteer moderators who write and enforce rules customized for each community; these rules are enforced wholly by the moderators, without direction from Reddit. The second layer is Reddit's Content Policy, which is enforced both by subreddit moderators and by Reddit administrators. These site-wide rules include bans on harassment, involuntary pornography, promoting violence, sharing personal information or doxing,[1] and other behaviors deemed unacceptable for Reddit. Ethical issues surrounding this model include freedom of speech; bias from algorithms, moderators, and administrators; the psychological effects on volunteer moderators; the decentralized model of moderation itself; lack of moderation; and abuse of power by moderators and administrators.


Pie Chart of Removed Content

Unlike other social media websites, Reddit uses a highly decentralized model of moderation which, according to its official Transparency Report, "is the most scalable solution we’ve seen to the challenges of moderating content online."[2] Moderators have the freedom to enforce their subreddit's rules as they see fit, but they must also enforce Reddit's Content Policy. This model of moderation can be understood in five parts:

  • Auto-Moderator[3] is a bot built into Reddit that moderators use to enforce rules that would be tedious or repetitive to enforce by hand, as well as to post recurring community discussion threads. It can flag, detect, and/or remove rule-breaking or inappropriate content on a subreddit. Auto-Moderator and similar bots are often the first line of moderation and help keep subreddits free of spam and the most obvious rule-breaking content. Bots have removed about 2,767,257,085 pieces of content from Reddit. [4]
  • User Reports occur when a user employs Reddit's report feature to bring possibly rule-breaking content to the attention of moderators, who review it and take action if necessary. These reports are anonymous. If inappropriate content gets past Auto-Moderator, a lack of user reports may leave it visible on a subreddit for an indefinite amount of time, until a report is filed or a moderator notices and removes it. [5]
  • Moderator Action is a moderator or moderation team removing content considered rule-breaking for their subreddit and issuing suspensions or bans if necessary. This process can be slow, since reports wait in a queue to be reviewed and moderators are not online at all times to check it. This type of moderation accounts for the vast majority of moderation actions on Reddit, with 84,140,588 instances of content removal. [6]
  • Administrator Action is Reddit administrators taking moderation action over content considered to be in violation of the site-wide Content Policy, or against subreddit moderators who have not been enforcing it. This is the smallest portion of moderation actions, with 222,309 instances of content removal. Reddit administrators also have the ability to quarantine a community, preventing its content from appearing anywhere but on the subreddit itself.
  • No Moderation is the absence of any moderation whatsoever. This occurs most often on abandoned subreddits where moderators are inactive, or where moderators fail to moderate their subreddit according to the site-wide Content Policy.
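Auto-Moderator rules are configured in YAML on a subreddit's wiki. As a minimal illustrative sketch (the trigger phrases, removal reason, and report threshold below are invented for the example, not taken from any real subreddit), a rule set covering the first two layers above might look like:

```yaml
---
# Hypothetical rule: remove submissions whose titles contain
# common spam phrases and log a reason for the mod team.
type: submission
title (includes): ["free giveaway", "click here to win"]
action: remove
action_reason: "Spam phrase in title: {{match}}"
---
# Hypothetical rule: once a comment accumulates 3 user reports,
# hide it from the subreddit until a human moderator reviews it.
type: comment
reports: 3
action: filter
action_reason: "Multiple user reports; awaiting moderator review"
```

Each `---`-separated block is one rule; this division mirrors the model above, with the first rule acting as the automated first line of moderation and the second escalating user reports into the human review queue.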

Notable Instances of Moderation in Reddit

Since its founding in 2005, Reddit has had several notable instances of content moderation, often in reaction to current events or incidents.

May 2011: Posting any personal information is prohibited. [7]

October 2011: r/jailbait is banned after a CNN report questioned whether it hosted child pornography. [8]

April-May 2013: In the aftermath of the Boston Marathon bombing, several Reddit users attempted to identify the bomber and falsely accused missing student Sunil Tripathi, who was later found dead by suicide following the witchhunt. Reddit later apologized and promised to use the incident to prevent similar episodes and to be more sensitive to its rule on personal information. [9]

October 2013: r/politics bans several websites to raise the quality of content on the subreddit. [10]

September 2014: Reddit reaffirms its commitment to freedom of speech after banning r/thefappening, a subreddit featuring celebrity nude photos leaked from iCloud. [11]

June 2015: Reddit bans five subreddits under its harassment policy, including r/fatpeoplehate, r/hamplanethatred, r/neofag, r/transfags, and r/shitniggerssay. [12]

November 2016: r/pizzagate, a spin-off of r/The_Donald dedicated to the Pizzagate conspiracy theory, is banned for doxing. [13]

June 2019: r/The_Donald is quarantined for promoting violence against police and other public officials. [14]

February 2020: Reddit administrators remove several moderators from r/The_Donald for failing to uphold the Content Policy. [15]

Ethical Issues

The ethical issues involving content moderation in Reddit generally revolve around the actions, or lack thereof, of moderators. Bots are used to assist with moderation, but the majority of moderation is subjective and thus raises ethical concerns.

Freedom of Speech

Reddit has a long and contradictory history with freedom of speech. In 2011, Reddit general manager Erik Martin stated, "We're a free speech site with very few exceptions (mostly personal info) and having to stomach occasional troll Reddit like picsofdeadkids or morally questionable reddits like jailbait is part of the price of free speech on a site like this."[16] In 2015, then-CEO Ellen Pao claimed, "It's not our site's goal to be a completely free-speech platform."[17] This contradiction, together with Reddit's decentralized model, has allowed freedom of speech to persist so long as content does not violate Reddit's Content Policy or attract attention to Reddit for the wrong reasons, as happened with r/pizzagate, r/jailbait, r/fatpeoplehate, and similar subreddits.


Algorithms

Algorithms can be incorrectly programmed, intentionally or not, and cause problems that may not be apparent to moderators until they or their communities notice them. Similarly, moderation bots such as Auto-Moderator can have bias built in as a subreddit customizes the bot to fulfill its moderation needs. The inability of bots to understand the context of content can also lead to errors that a human moderator would not make.

Moderators

Moderators are expected to be impartial in all matters, but their actions can be heavily influenced by their personal biases, often unknowingly. Moderators can also easily abuse their powers, since they rarely have any supervision from Reddit administrators unless they fail to uphold site-wide rules. Reddit's decentralized moderation system works until it does not: without checks and balances, moderators can moderate however they see fit, yet Reddit prides itself on the very model that allows them to abuse their powers. Moreover, removing an abusive subreddit moderator requires either action by the rest of the moderation team or a complete user revolt, which is extremely difficult to achieve because moderators have the power to ban those revolting.

Effects on Volunteer Moderators

The psychological effects of reviewing or reading rule-breaking content cannot be overstated. Unlike on most websites, banned users on Reddit can see which moderator banned them. This allows banned users to harass moderators over their actions and to make new accounts to keep up the harassment. A subreddit moderator known as /u/lolihull has been subjected not only to death threats but also to rape threats when users she banned learned she is female. These words may be posted by anonymous users, but as /u/gallowboob notes, "Even if you're having the best day, reading a message like 'I'm going to kill you' or 'walk in front of a bus,' is shitty."[18] Reddit has only promised to continue reviewing and updating its policies, while the volunteer moderators who make Reddit run are left without much support. Moreover, it is simple for a user to evade a ban by making a new account or changing IP addresses to keep harassing a moderator.

Lack of Moderation

Some subreddits have moderators who take a hands-off approach to controlling their communities. Many subreddits have been quarantined in part because of this, but also because moderators failed to uphold site-wide rules. Many of the more controversial subreddits have been removed only after pressure from the media, which raises the question of whether they should have been banned earlier rather than after bad press.
