Content Moderation in Facebook

(Image: Workers at a Facebook content moderation center [1])

Content moderation is the process of screening content that users post online against a set of pre-established rules or guidelines to determine whether it is appropriate [2]. Facebook is a social media platform that allows people to connect with friends, family, and communities of people who share common interests [3]. Like many other popular social media platforms, Facebook has developed an approach to moderating and controlling the type of content users see and engage with. Facebook relies on two main mechanisms for content moderation: AI moderators and human moderators. Both enforce Facebook's community standards, which lay out what each post is allowed to contain [4]. Although Facebook's content moderation is often successful in reducing harmful content on its platform, public attention tends to focus on the instances where it has failed [5]. The ethics of Facebook's approach have also been widely debated, from the mental health struggles human moderators are forced to deal with [6] to questions about how the AI is trained to flag inappropriate content [7].

Overview/Background

(Image: How content is filtered [8])

When it comes to content moderation, Facebook uses both AI moderators and human moderators. Posts that violate the community standards are deemed inappropriate; this covers everything from spam to hate speech to content that involves violence [4]. Clear violations are caught right away by the AI moderators, while human moderators handle the posts the AI is less sure about, and their decisions are vital to improving the machine learning models the AI technology relies on [9].

AI Moderators

Facebook initially filters every post through its AI technology. This technology is built on machine learning (ML) models that can analyze the text of a post or recognize different items in a photo. The models are used to determine whether the content of a post fits within the community standards or whether action needs to be taken on the post, such as removing it [4].

Sometimes the AI technology is unsure whether content violates the community standards, so it sends the content to the human review teams. Once a review team makes a decision, the AI technology learns from that decision; this is how the technology is trained and improves over time. Facebook's community standards also change frequently to keep up with shifts in social norms, language, and its products and services, which requires the content review process to keep evolving as well [4].
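
The hybrid workflow described above can be summarized as a confidence-threshold loop. The following is a minimal, hypothetical sketch, not Facebook's actual system: the classifier, thresholds, and helper functions are invented stand-ins, and the point is only to show how uncertain posts get escalated to humans and how those human decisions become new training labels.

REMOVE_THRESHOLD = 0.95  # assumed cutoff for automatic removal
ALLOW_THRESHOLD = 0.05   # assumed cutoff for leaving a post up

training_labels = []     # human decisions collected for future retraining

def ai_score(post_text):
    """Toy stand-in for a trained classifier; returns a violation probability."""
    flagged_words = {"spam", "hate"}
    hits = sum(word in post_text.lower() for word in flagged_words)
    return min(1.0, 0.5 * hits)

def human_review(post_text):
    """Toy stand-in for a human moderator's judgment."""
    return "remove" if "hate" in post_text.lower() else "allow"

def moderate(post_text):
    score = ai_score(post_text)
    if score >= REMOVE_THRESHOLD:
        return "remove"                            # AI acts on its own
    if score <= ALLOW_THRESHOLD:
        return "allow"                             # clearly fine, no review needed
    decision = human_review(post_text)             # AI is unsure: escalate to a human
    training_labels.append((post_text, decision))  # human decision becomes a training label
    return decision

print(moderate("ordinary vacation photos"))   # allow (AI is confident)
print(moderate("spam spam and more hate"))    # remove (AI is confident)
print(moderate("borderline spam offer"))      # escalated to a human reviewer

In a real system the thresholds would differ by violation type, and the accumulated labels would be used to periodically retrain the model, which is the kind of feedback loop described above.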

Until fairly recently, flagged posts were reviewed in the order they were reported. Facebook says it wants the most important posts to be seen first, so it changed its machine learning algorithms to prioritize the most severe or harmful posts [9]. Facebook has also reworked how it deems a post a violation. It used to rely on separate classification systems that looked at individual parts of a post, splitting the work up by content type and violation type and running many different classifiers over photos and text. Facebook decided this was too disconnected and created a new approach. Now, Facebook says its machine learning works holistically through Whole Post Integrity Embeddings (WPIE), which evaluates a post as a whole. The model was trained on a very broad selection of violations and, according to Facebook, has substantially improved detection of harmful content [10].
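
To make the "holistic" idea concrete, the following toy sketch scores a post as a whole by combining text features and image features before classification, instead of running separate per-modality classifiers. The feature extractors, keywords, and weights are invented for illustration and do not reflect WPIE's actual design.

def text_features(text):
    # Toy text features: length plus a count of suspicious sales words.
    words = text.lower().split()
    return [len(words), sum(w in {"buy", "free", "miracle"} for w in words)]

def image_features(image_labels):
    # Pretend an image model has already labeled the objects in the photo.
    return [len(image_labels), int("pill bottle" in image_labels)]

def whole_post_score(text, image_labels, weights):
    # Fuse both modalities into one feature vector and score the post as a whole.
    features = text_features(text) + image_features(image_labels)
    return sum(w * f for w, f in zip(weights, features))

weights = [0.1, 1.0, 0.1, 1.5]  # invented weights standing in for a trained classifier

# The text alone ("miracle cure, buy now") and the image alone (a pill bottle) might
# each look borderline, but scored together the post looks like likely drug-sale spam.
print(whole_post_score("miracle cure buy now", ["pill bottle", "table"], weights))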

Human Moderators

Facebook filters all posts through its AI technology first, but if the AI decides that a piece of content requires further review, it passes the content to Facebook's human moderators. The human review teams take a closer look and decide whether or not to remove the post. In other words, the human moderators get the final say [4].

There are about 15,000 Facebook content moderators employed throughout the world. Their main job is to sort through AI-flagged posts and decide whether they violate the company's guidelines [9]. Facebook has worked with many outside companies to help moderate content, including Cognizant, Accenture, Arvato, and Genpact [1].

Facebook is one of the largest social media platforms; as of Q3 2022, it reported 2.96 billion users worldwide [11]. Because Facebook must serve so many users, its human moderators are often overwhelmed. Many human moderators have also reported struggling with mental health issues as a result of the job, and many employees have described the work environment as stressful, dirty, and unhealthy [1].

Human moderators receive extremely low pay compared to Facebook's other employees. Moderators at Cognizant, one of the companies Facebook uses to provide moderators, earn only about $4 above the state minimum wage. For example, Cognizant employees working in Phoenix, Arizona make only $28,800 per year, whereas the average Facebook employee has a total compensation of $240,000 [6]. Many content moderators have expressed frustration with the low pay [12].

Self Moderation

Facebook additionally provides page managers with tools to moderate their own pages. At a basic level, these include controlling what types of posts users can upload (photos, text, comments, or nothing at all). The permissions extend to banning users, hiding or deleting comments and posts, and blocking profanity or specific vocabulary from text posts. The targeted content may not violate Facebook's community standards, but for whatever reason the page manager does not want to see it. Users can block certain words from appearing or block other users entirely. While this option could create a biased or narrow-minded view of a topic, giving users the opportunity to moderate the content they see is important to Facebook. Facebook also has a "Moderation Assist" feature that uses AI to hide content from individual accounts or posts based on features of the account or post, including not having a profile picture, not having friends or followers, posts containing links, or posts containing custom keywords [13].
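
As a rough illustration of how such page-level settings might be applied to an incoming comment, here is a hypothetical sketch. The setting names, criteria, and data structures below are invented and are not Facebook's actual configuration format or API.

PAGE_SETTINGS = {
    "blocked_keywords": {"giveaway", "crypto"},  # page manager's custom keyword list
    "banned_users": {"spam_account_123"},        # users banned from the page
    "hide_if_no_profile_picture": True,          # Moderation Assist-style account criterion
    "hide_if_contains_link": True,               # Moderation Assist-style post criterion
}

def should_hide(comment_text, author, settings):
    text = comment_text.lower()
    if author["username"] in settings["banned_users"]:
        return True
    if any(word in text for word in settings["blocked_keywords"]):
        return True
    if settings["hide_if_no_profile_picture"] and not author["has_profile_picture"]:
        return True
    if settings["hide_if_contains_link"] and "http" in text:
        return True
    return False

new_account = {"username": "new_user_9", "has_profile_picture": False}
regular_user = {"username": "old_friend", "has_profile_picture": True}
print(should_hide("Nice photo!", new_account, PAGE_SETTINGS))                 # True: no profile picture
print(should_hide("Free crypto giveaway!", regular_user, PAGE_SETTINGS))      # True: blocked keyword
print(should_hide("Congrats on the new job!", regular_user, PAGE_SETTINGS))   # False: allowed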

How Successful is Facebook’s Content Moderation Process?

Over three million posts are reported daily, by users or by the AI screening technology, as possible violations of the content guidelines. In 2018, Mark Zuckerberg, the CEO of Facebook, stated that moderators "make the wrong call in more than one out of every 10 cases," which equates to roughly 300,000 mistakes made every day [5].

AI Mistakes

Facebook has faced criticism over its content moderation process, specifically its automated removal systems, its vague rules, and the unclear explanations it gives for its decisions. The content moderation AI faces backlash for reasons including unreliable algorithms, vague standards and guidelines, an inability to take broader context into account, and a lack of proportionality in its responses [7].

However, it is important to note that Facebook is constantly changing its guidelines to keep up with the times, and it is hard for the algorithms to change and update at the same pace [4].

Instances

Covid-19 (The Good & The Bad)

When the Covid-19 pandemic hit in March 2020, Facebook decided to send its human moderators home and rely only on AI. As a result, Facebook saw a decrease in removals of child sexual abuse material and of suicide and self-injury content. The drop was not due to a decrease in those types of posts, but to the limited number of human moderators available to review them. Facebook also stated that it did not feel comfortable having people review extremely graphic content at home [14].

Humans are not responsible for finding all child sexual abuse material; automated systems remove 97.5% of such posts that appear on Facebook. But when human moderators are unable to flag those posts, the AI system lacks the examples it needs to find that material at scale. The Covid-19 pandemic showed Facebook how vital human moderators are to making its content moderation a success [14].

Even though Facebook had access to fewer human moderators because of Covid-19, it noted that its AI technology was able to improve in other areas, for example by raising its proactive detection rate for hate speech, terrorism, and bullying and harassment content [15].

Backlash around Free Speech

Lawsuits

Language Gaps

Ethical Concerns

Mental Health & Unfair Treatment of Human Moderators

One of the biggest ethical concerns surrounding Facebook's content moderation process is the treatment of the human moderators, who are reported to receive unfair wages, work unreasonable hours, and be left to deal with serious mental health issues [5].

Former Facebook moderators have come forward to say that they have struggled with PTSD and trauma from some of the content they were required to view [16]. One former moderator, Josh Sklar, described how Facebook was constantly changing its policies and guidelines and never gave moderators a clear quota; in his view, the operation was extremely mismanaged. He also expressed great frustration with the type of content he was reviewing, such as child sexual abuse material, citing it as his main reason for quitting, and he hopes Facebook will place more emphasis on moderators' mental health in the future [17].

Facebook uses content moderation provider companies from around the world, and one of its largest, Sama, recently decided to stop working with Facebook. A TIME investigation found low pay, trauma, and alleged union-busting at the company [12].

Moderation rules/classifications

LGBTQ community

Conclusion

References

  1. Wong, Q. (2019, June). Facebook content moderation is an ugly business. Here's who does it. CNET. Retrieved from https://www.cnet.com/tech/mobile/facebook-content-moderation-is-an-ugly-business-heres-who-does-it/
  2. Roberts, S. T. (2017). Content moderation. In Encyclopedia of Big Data. UCLA. Retrieved from https://escholarship.org/uc/item/7371c1hf
  3. Facebook. Meta. Retrieved from https://about.meta.com/technologies/facebook-app/
  4. How does Facebook use artificial intelligence to moderate content? Facebook Help Center. Retrieved from https://www.facebook.com/help/1584908458516247
  5. Barrett, P. M. (2020). Who Moderates the Social Media Giants? A Call to End Outsourcing (report). NYU Stern Center for Business and Human Rights.
  6. Simon, S. & Bowman, E. (2019, March 2). Propaganda, hate speech, violence: The working lives of Facebook's content moderators. NPR. Retrieved from https://www.npr.org/2019/03/02/699663284/the-working-lives-of-facebooks-content-moderators
  7. Hecht-Felella, L. & Patel, F. (2022, November 18). Facebook's content moderation rules are a mess. Brennan Center for Justice. Retrieved from https://www.brennancenter.org/our-work/analysis-opinion/facebooks-content-moderation-rules-are-mess
  8. How does Facebook use artificial intelligence to moderate content? Facebook Help Center. Retrieved from https://www.facebook.com/help/1584908458516247
  9. Vincent, J. (2020, November 13). Facebook is now using AI to sort content for quicker moderation. The Verge. Retrieved from https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation
  10. New progress in using AI to detect harmful content. Meta AI. Retrieved from https://ai.facebook.com/blog/community-standards-report/
  11. Facebook: quarterly number of MAU (monthly active users) worldwide 2008-2022. Statista. (2022, October 27). Retrieved from https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#statisticContainer
  12. Perrigo, B. (2023, January 10). Facebook's partner in Africa Sama quits content moderation. Time. Retrieved from https://time.com/6246018/facebook-sama-quits-content-moderation/
  13. Meta Business Help Center: Moderation. Meta. Retrieved from https://www.facebook.com/business/help/1323914937703529
  14. Lapowsky, I. (2020, August 12). How Covid-19 helped - and hurt - Facebook's fight against bad content. Protocol. Retrieved from https://www.protocol.com/covid-facebook-content-moderation
  15. Rodriguez, S. (2020, August 11). Covid-19 slowed Facebook's moderation for suicide, self-injury and child exploitation content. CNBC. Retrieved from https://www.cnbc.com/2020/08/11/facebooks-content-moderation-was-impacted-by-covid-19.html
  16. Sklar, J. & Silverman, J. (2023, January 26). I was a Facebook content moderator. I quit in disgust. The New Republic. Retrieved from https://newrepublic.com/article/162379/facebook-content-moderation-josh-sklar-speech-censorship
  17. Newton, C. (2019, February 25). The trauma floor: The secret lives of Facebook moderators in America. The Verge. Retrieved from https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona