Content Moderation in Facebook

Revision as of 16:38, 26 January 2023

Workers at a Facebook content moderation center [1]

Content moderation is the process of screening content users post online by applying a set of pre-set rules or guidelines to determine whether it is appropriate [2]. Facebook is a social media platform that allows people to connect with friends, family, and communities of people who share common interests [3]. Facebook is one of the largest social media platforms; as of Q3 2022, it reported 2.96 billion monthly active users worldwide [4]. Like many other popular social media platforms, Facebook has developed an approach to moderate and control the type of content users see and engage with. Facebook takes two main approaches to content moderation: it utilizes both AI moderators and human moderators. Facebook moderates its content against its community standards, which lay out the rules it believes each post should follow [5]. There have been many instances where Facebook's content moderation tactics have succeeded, but also many where they have failed. The ethics of Facebook's content moderation approach have also been widely controversial, from the mental health struggles human moderators are forced to deal with to questions about how the AI is trained to flag inappropriate content.

Overview/Background

How content is filtered

When it comes to content moderation, Facebook utilizes both AI moderators and human moderators. Posts that violate the community standards are deemed inappropriate; this includes everything from spam to hate speech to content that involves violence [5]. Inappropriate content is first detected by the AI moderators, while the human moderators are responsible for posts the AI is not quite sure about and are vital to improving the machine learning models the AI technology uses [6].

AI Moderators

Facebook initially filters all posts through its AI technology. Facebook's AI starts with machine learning (ML) models that can analyze the content of a post or recognize different items in a photo. These models are used to determine whether a post fits within the community standards or whether action needs to be taken on it, such as removing it [5].
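
The classification step above can be sketched as a simple scoring-and-thresholding pass. This is a hypothetical illustration, not Facebook's actual system: the categories, keyword lists, and thresholds are invented, and a keyword score stands in for a trained ML model.

```python
# Hypothetical sketch of an AI moderation pass: a model scores a post
# against policy categories and the scores drive an action. A keyword
# count stands in here for what would really be a trained classifier.

POLICY_KEYWORDS = {
    "spam": ["free money", "click here"],
    "violence": ["attack", "hurt"],
}

def score_post(text):
    """Return per-category scores in [0, 1] (stand-in for model output)."""
    text = text.lower()
    return {
        category: min(1.0, sum(kw in text for kw in kws) / 2)
        for category, kws in POLICY_KEYWORDS.items()
    }

def recommend_action(text, remove_threshold=0.9, review_threshold=0.4):
    """Map the highest category score to remove, human review, or allow."""
    top_score = max(score_post(text).values())
    if top_score >= remove_threshold:
        return "remove"
    if top_score >= review_threshold:
        return "human_review"
    return "allow"
```

In a real pipeline the `score_post` stand-in would be trained text and image classifiers; the thresholding logic is what routes a post toward removal, review, or approval.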

Sometimes the AI technology is unsure whether content violates the community standards, so it sends the content to the human review teams. Once a review team makes a decision, the AI technology can learn and improve from that decision. This is how the technology is trained and gets better over time [5].
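
The feedback loop described above (uncertain posts go to human reviewers, whose decisions become new training signal) can be sketched as follows; the function names, field names, and confidence threshold are assumptions made for illustration.

```python
# Hypothetical human-in-the-loop feedback: posts the model is unsure
# about are queued for human review, and each human decision is recorded
# as a labeled example that can later be used to retrain the model.

review_queue = []    # posts awaiting a human decision
training_data = []   # (text, label) pairs collected from human decisions

def triage(text, model_confidence):
    """Auto-act on confident predictions; queue uncertain ones for humans."""
    if model_confidence >= 0.9:
        return "auto_removed"
    review_queue.append(text)
    return "queued_for_review"

def record_human_decision(text, violates_standards):
    """A reviewer's verdict becomes a training example for the next model."""
    review_queue.remove(text)
    training_data.append((text, "violation" if violates_standards else "ok"))
```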

Facebook's community standard policies often change to keep up with shifts in social norms, language, and Facebook's own products and services. This requires the content review process to constantly evolve [5].

In the past, posts were reviewed in the order they were reported; however, Facebook says it wants to make sure the most important posts are seen first, so it changed its machine learning algorithms to prioritize the most severe or harmful posts [6]. Facebook has also recently reworked how it deems posts a violation. It used to have separate classification systems that looked at individual parts of a post, split by content type and violation type, with many classifiers examining photos and text separately. Facebook decided this was too disconnected and created a new approach: its machine learning algorithm now takes a holistic approach called Whole Post Integrity Embeddings (WPIE), which was trained on a very widespread selection of violations and has greatly improved results [7].
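
The change from chronological review to severity-based review can be sketched with a priority queue; the violation types and severity weights below are invented for illustration.

```python
import heapq

# Hypothetical severity-ordered review queue: instead of reviewing posts
# in the order they were reported, the most harmful ones are surfaced
# first. Severity weights are invented for illustration.
SEVERITY = {"violence": 3, "hate_speech": 2, "spam": 1}

class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker that preserves report order

    def report(self, post, violation_type):
        # heapq is a min-heap, so negate severity to pop most severe first
        item = (-SEVERITY[violation_type], self._counter, post)
        heapq.heappush(self._heap, item)
        self._counter += 1

    def next_post(self):
        """Return the most severe (then oldest) post awaiting review."""
        return heapq.heappop(self._heap)[2]
```

Under this ordering, a violent post reported last would still be reviewed before spam reported first.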

Human Moderators

Facebook initially filters all posts through its AI technology, but if the AI decides that certain pieces of content require further review, it passes them along to the human review teams, who take a closer look and decide whether or not to remove the post. In other words, the human moderators get the final say [5].

There are about 15,000 Facebook content moderators employed throughout the world. Their main job is to sort through AI-flagged posts and decide whether or not they violate the company's guidelines [6]. Facebook has worked with many companies to help moderate content, including Cognizant, Accenture, Arvato, and Genpact [1].

Since Facebook is one of the largest social media sites, with nearly 3 billion users worldwide [4], the human moderators are often overwhelmed. Many human moderators have also reported struggles with mental health issues as a result of the job. See Ethical Concerns for more detail.

How Successful is Facebook’s Content Moderation Process?

Over three million posts are reported daily, by either users or the AI screening technology, as possible content guideline violations. In 2018, Mark Zuckerberg, the CEO of Facebook, stated that moderators “make the wrong call in more than one out of every 10 cases.” That equates to about 300,000 mistakes made every day [8].
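
As a quick sanity check of the arithmetic above (assuming exactly three million daily reports and a one-in-ten error rate, the round numbers the source cites):

```python
# Reported figures: ~3 million flagged posts per day, with moderators
# making "the wrong call in more than one out of every 10 cases".
reported_posts_per_day = 3_000_000
error_ratio = 10  # one mistake per ten decisions

# Implied number of mistaken moderation calls per day
mistakes_per_day = reported_posts_per_day // error_ratio
```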


Instances

AI Mistakes

Covid

Backlash around Free Speech

Lawsuits

Language Gaps

Donald Trump

Ethical Concerns

Mental Health

Moderation rules/classifications

LGBTQ community

Conclusion

References

  1. Wong, Q. (2019, June). Facebook content moderation is an ugly business. Here's who does it. CNET. Retrieved from https://www.cnet.com/tech/mobile/facebook-content-moderation-is-an-ugly-business-heres-who-does-it/
  2. Roberts, S. T. (2017). Content moderation. In Encyclopedia of Big Data. UCLA. Retrieved from https://escholarship.org/uc/item/7371c1hf
  3. Facebook. Meta. Retrieved from https://about.meta.com/technologies/facebook-app/
  4. Facebook: quarterly number of MAU (monthly active users) worldwide 2008-2022. Statista. (2022, October 27). Retrieved from https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#statisticContainer
  5. How does Facebook use artificial intelligence to moderate content? Facebook Help Center. Retrieved from https://www.facebook.com/help/1584908458516247
  6. Vincent, J. (2020, November 13). Facebook is now using AI to sort content for quicker moderation. The Verge. Retrieved from https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation
  7. New progress in using AI to detect harmful content. Meta AI. Retrieved from https://ai.facebook.com/blog/community-standards-report/
  8. Barrett, P. M. (2020). Who Moderates the Social Media Giants? A Call to End Outsourcing. NYU Stern Center for Business and Human Rights.