Content Moderation in Facebook

Image caption: Workers at a Facebook content moderation center [1]

Content moderation is the process of screening content users post online against a set of pre-established rules or guidelines to determine whether it is appropriate [2]. Facebook is a social media platform that allows people to connect with friends, family, and communities of people who share common interests [3]. Like many other popular social media platforms, Facebook has developed an approach to moderate and control the type of content users see and engage with. Facebook relies on two main approaches to content moderation: AI moderators and human moderators. Facebook moderates its content according to its community standards, which lay out what every post is expected to follow [4]. There have been many instances where Facebook’s content moderation tactics have succeeded, but also many where they have failed. The ethics of Facebook’s content moderation approach have also been widely debated, from the mental health struggles human moderators are forced to deal with to questions about how the AI is trained to flag inappropriate content.

Overview/Background

How content is filtered

When it comes to content moderation, Facebook utilizes both AI moderators and human moderators. While the majority of content deemed inappropriate is caught by the AI moderators, human moderators are responsible for posts the AI is unsure about, and their decisions are vital to improving the machine learning models the AI technology uses [4].

AI Moderators

Facebook initially filters all posts through its AI technology. Facebook’s AI starts with machine learning (ML) models that can analyze the text of a post or recognize different items in a photo. These models are used to determine whether a post’s content fits within the community guidelines or whether action needs to be taken on the post, such as removing it [4].

Sometimes the AI technology is unsure whether content violates the community guidelines, so it sends the content to the human review teams. Once a review team makes a decision, the AI technology is able to learn from that decision. This is how the technology is trained and improves over time [4].
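Facebook’s public documentation does not describe the exact mechanism, but the pipeline described above can be illustrated with a minimal sketch: a model scores each post, high-confidence scores are handled automatically, and uncertain posts are escalated to human reviewers whose decisions become new training labels. All class names, thresholds, and scoring logic below are hypothetical assumptions for illustration, not Facebook’s actual system.

```python
# Minimal, hypothetical sketch of an AI + human-review moderation loop.
# Thresholds, names, and the toy scoring function are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ModerationPipeline:
    remove_threshold: float = 0.9   # score above which a post is removed automatically
    allow_threshold: float = 0.1    # score below which a post is allowed automatically
    review_queue: List[str] = field(default_factory=list)
    training_labels: List[Tuple[str, bool]] = field(default_factory=list)

    def score(self, post: str) -> float:
        """Stand-in for an ML model returning the probability a post violates policy."""
        banned = {"spam", "scam"}  # toy signal used only for this sketch
        words = post.lower().split()
        return sum(w in banned for w in words) / max(len(words), 1)

    def moderate(self, post: str) -> str:
        s = self.score(post)
        if s >= self.remove_threshold:
            return "removed"            # high-confidence violation: act automatically
        if s <= self.allow_threshold:
            return "allowed"            # high-confidence benign: no action
        self.review_queue.append(post)  # uncertain: escalate to human reviewers
        return "sent_to_human_review"

    def record_human_decision(self, post: str, violates: bool) -> None:
        """Reviewer decisions become labels used to retrain the model later."""
        self.training_labels.append((post, violates))


# Example usage of the sketch
pipeline = ModerationPipeline()
print(pipeline.moderate("spam spam spam"))          # likely "removed"
print(pipeline.moderate("photos from our trip"))    # likely "allowed"
```

In this sketch, the two thresholds split the score range into automatic removal, automatic approval, and an uncertain middle band that is escalated to people, mirroring the division of labor between AI and human moderators that the documentation describes.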

Facebook’s community guideline policies often change to keep up with shifts in social norms, language, and Facebook’s own products and services. This requires the content review process to be constantly evolving [4].

Human Moderators

Individual Moderation

Instances

COVID-19

Backlash around Free Speech

Lawsuits

Language Gaps

Ethical Concerns

Mental Health

Conclusion

References

  1. Wong, Q. (2019, June). Facebook content moderation is an ugly business. Here's who does it. CNET. Retrieved from https://www.cnet.com/tech/mobile/facebook-content-moderation-is-an-ugly-business-heres-who-does-it/
  2. Roberts, S. T. (2017). Content moderation. In Encyclopedia of Big Data. UCLA. Retrieved from https://escholarship.org/uc/item/7371c1hf
  3. Facebook. Meta. Retrieved from https://about.meta.com/technologies/facebook-app/
  4. How does Facebook use artificial intelligence to moderate content? Facebook Help Center. Retrieved from https://www.facebook.com/help/1584908458516247