Fact Checking

Fact checking is the process of investigating an issue or claim in order to verify the facts. Fact-checking has become increasingly relevant in discussions of the role that major social media companies play as news sources. According to a 2018 survey, over two-thirds of Americans get some of their news from social media: 43% of Americans get news from Facebook, 21% from YouTube, and 12% from Twitter [1]. While fact-checking the information one is exposed to has always been part of the discussion (i.e., using reliable sources, researching the author(s), etc.), there is now debate over who is responsible for fact-checking information online: is it the responsibility of the user to ensure their information comes from a reliable source, or is it the responsibility of the platforms that share this information (i.e., Facebook, Twitter, Google, etc.)? Fact-checking is important because exposure to misinformation can greatly influence people’s opinions and, in turn, their actions.

There are different types of false information on the internet that might be fact-checked. Misinformation is incorrect or misleading information [2]. Misinformation is unique in that it is spread by people who do not know the information is false. An example of misinformation is “fake news”: many people share fake news without knowing it is fake, because it catches their attention and is interesting. Disinformation, by contrast, is the spreading of false information by someone who knows it is false [3].

Responsibility of Platforms

Third parties such as major social media platforms and government officials face pressure to combat the spread of fake news and misinformation. The United States aims both to protect First Amendment free-speech rights and to ensure that its people have access to accurate information. Until the last couple of years, users were generally left with the responsibility of fact-checking their news and evaluating it for truth. Below is a cross-platform analysis of different third parties’ policies regarding fact-checking.

Facebook is an online social media and social networking service founded in 2004 by Mark Zuckerberg, who remains its CEO. Facebook is part of the technology conglomerate Meta, which also includes Instagram and WhatsApp. The policies outlined below apply to all Meta-owned applications, but Facebook is at the center of the discussion of fact-checking information.

Twitter is a social networking and microblogging platform founded in 2006 by Jack Dorsey; its current CEO is Parag Agrawal. 80% of Twitter's user base lies outside the U.S. [4].

YouTube is an online video sharing and social media platform owned by Google. It was founded in 2005, and its CEO is Susan Wojcicki.

Fact-Check Policies

Misinformation Policies

Establishing a political misinformation policy means that platforms acknowledge disinformation when it appears and potentially enforce penalties against content and users that frequently violate their guidelines.

  • Facebook has partnered with third-party fact-checking organizations to implement a remove/reduce/inform policy: content flagged as false is covered with a disclaimer and paired with a fact-check, and its reach (the total number of people who see the content) is reduced by Facebook’s ranking algorithms. Users who attempt to share or post fact-checked information are shown a notification informing them the information may not be correct (a minimal sketch of this flow appears after this list). The scale of Meta’s platforms currently outstrips the remove/reduce/inform policy. Elected officials and certain Facebook-approved candidates are exempt from this policy and have been linked to some of the greatest sources of misinformation on the site.
  • Twitter has established a fact-checking policy that applies to information that could undermine confidence in democratic elections, but it does not have a widespread fact-checking program. Twitter is currently working on a feature that asks users who try to retweet articles they have not opened and read whether they are sure they want to retweet them [5].
  • YouTube also has a fact-checking policy against information that could undermine confidence in democratic elections. It has also introduced fact-check panels to debunk popular false claims (i.e., conspiracy theories). Overall, however, this policy has not been enforced.
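The remove/reduce/inform flow described above can be summarized as a simple decision pipeline. Below is a minimal Python sketch of that flow; the verdict labels, field names, and demotion factors are hypothetical illustrations under assumed values, not Meta’s actual implementation or API.

    # Hypothetical sketch of a remove/reduce/inform moderation flow.
    # Verdict labels and demotion factors are invented for illustration;
    # this is not Meta's actual system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        verdict: str     # third-party fact-check rating: "false", "partly_false", or "unrated"
        base_reach: int  # estimated audience before any intervention

    def moderate(post: Post) -> dict:
        """Apply a remove/reduce/inform-style policy to one post."""
        actions = {"disclaimer": None, "reach": post.base_reach, "warn_on_share": False}
        if post.verdict == "false":
            actions["disclaimer"] = "Independent fact-checkers rated this post False."  # inform
            actions["reach"] = int(post.base_reach * 0.2)  # reduce (assumed demotion factor)
            actions["warn_on_share"] = True                # prompt users before resharing
        elif post.verdict == "partly_false":
            actions["disclaimer"] = "This post contains partly false information."
            actions["reach"] = int(post.base_reach * 0.5)
            actions["warn_on_share"] = True
        return actions

    print(moderate(Post("Miracle cure found!", "false", 10_000)))
    # {'disclaimer': 'Independent fact-checkers rated this post False.', 'reach': 2000, 'warn_on_share': True}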

Hate Speech

Enforcing guidelines on hate speech and removing content intended to intimidate or dehumanize individuals or groups reduces real-world violence. All major social media companies have policies against hate speech, but enforcement and accuracy are inconsistent across platforms. Moderation algorithms generally do not treat minority groups the same as majority groups.

  • Following the 2016 election, Facebook implemented and improved hate speech policies. Removing white nationalism, militarized social movements, and Holocaust denial from the platform has been a major goal for Facebook. However, as the January 6, 2021 Capitol insurrection, which was largely planned on Facebook, demonstrated, these policies have not been well enforced.
  • Twitter adapted its hate speech policy to include tweets that target individuals or groups based on age, disability, disease, race, or ethnicity. However, as seen with Facebook these policies are not well enforced.
  • YouTube removed channels run by hate group leaders, but its recommendation algorithm still suggests racist and hateful videos. Studies have shown that YouTube facilitates right-wing radicalization by encouraging users to “go down a rabbit hole” (moving from mainstream to extreme content in one sitting) via its recommendation algorithms; a toy simulation of this dynamic appears after this list.
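To make the “rabbit hole” dynamic concrete, the toy simulation below greedily recommends whichever nearby video maximizes expected watch time. The catalog, scores, and ranking rule are entirely invented; this is not YouTube’s recommender, only an illustration of how a pure engagement objective can drift toward extreme content.

    # Toy simulation of engagement-driven recommendation drift ("rabbit hole").
    # Video data, scores, and the ranking rule are invented for illustration.

    # Each video: (title, extremeness 0-1, expected watch-time score 0-1).
    catalog = [
        ("Mainstream news recap",    0.10, 0.40),
        ("Provocative opinion clip", 0.40, 0.55),
        ("Conspiracy 'explainer'",   0.70, 0.70),
        ("Extremist rant",           0.95, 0.85),
    ]

    def recommend(current_taste: float):
        """Greedy pick: maximize watch time among videos 'near' the user's taste.

        If slightly-more-extreme content reliably holds attention longer, a
        pure engagement objective walks the user up the extremeness scale.
        """
        nearby = [v for v in catalog if abs(v[1] - current_taste) <= 0.35]
        return max(nearby, key=lambda v: v[2])

    taste = 0.1  # user starts with mainstream preferences
    for step in range(4):
        title, extremeness, _ = recommend(taste)
        print(f"step {step}: {title} (extremeness {extremeness})")
        taste = extremeness  # watching shifts the inferred taste

Running the loop moves the user from the mainstream recap to the extremist rant in three recommendations, which is the “one sitting” drift the studies describe.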

Anonymous Accounts

Reducing anonymous accounts on platforms reduces the ability of foreign and other bad actors to spread disinformation. This can take the form of requiring users to confirm their identity.

  • Facebook requires accounts to be authentic and flags accounts it detects as fake.
  • Twitter does not require accounts to represent humans.
  • YouTube does not require accounts to be authentic. Google, however, considers anonymous accounts low quality, and they receive a low PageRank (a minimal PageRank sketch follows this list).
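PageRank itself is public and well documented: a page’s rank is the stationary probability that a “random surfer” lands on it, so pages with few inbound links, such as anonymous pages that nothing reputable links to, score low. Below is a minimal power-iteration sketch on an invented three-page link graph.

    # Minimal PageRank via power iteration. The link graph is invented
    # for illustration; only the algorithm itself is Google's.
    damping = 0.85

    # page -> list of pages it links to
    links = {
        "news_site": ["blog"],
        "blog":      ["news_site"],
        "anon_page": ["news_site"],  # links out, but no one links back to it
    }

    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}

    for _ in range(50):  # iterate until ranks stabilize
        rank = {
            p: (1 - damping) / len(pages)
               + damping * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }

    print(rank)  # the anonymous page with no inbound links ends up ranked lowest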

Recommendation Algorithms

Content recommendation algorithms are used to increase screen time. Making these algorithms transparent to users shifts more of the responsibility for evaluating content onto the users themselves.

  • Facebook has provided very little transparency into the algorithms behind its News Feed.
  • Twitter has provided very little transparency into the algorithms behind its top search results, ranked tweets, and who-to-follow suggestions.
  • Google’s ranking algorithm is well documented; YouTube, however, has provided very little transparency into its recommendation algorithms. A sketch of what greater transparency could expose appears after this list.
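One way to picture what “transparency” could mean in practice: instead of returning a single opaque number, a ranker could expose the contribution of each signal to a post’s position. The signals and weights below are invented for illustration; no platform discussed here documents its feed this way.

    # Illustrative "transparent" feed-ranking score: the ranker returns the
    # per-signal breakdown, not just one opaque number. Signals and weights
    # are invented assumptions, not any platform's real feature set.
    WEIGHTS = {"recency": 0.3, "engagement": 0.5, "affinity": 0.2}

    def rank_score(signals: dict) -> tuple:
        """Return (total score, contribution of each signal)."""
        contributions = {name: WEIGHTS[name] * signals[name] for name in WEIGHTS}
        return sum(contributions.values()), contributions

    total, breakdown = rank_score({"recency": 0.9, "engagement": 0.4, "affinity": 0.7})
    print(f"score = {total:.2f}")          # score = 0.61
    for signal, value in breakdown.items():
        print(f"  {signal}: {value:.2f}")  # shows why the post ranked where it did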

Hacked Content

All platforms have very strict policies banning the publication of hacked materials, which are considered stolen, forged, or manipulated documents.

References

  1. “Comparative Social Media Policy Analysis.” Democrats, 27 Aug. 2021.
  2. “Fake News, Misinformation, & Fact-Checking: Ohio University MPA.” Ohio University, 17 Oct. 2019.
  3. “Fake News, Misinformation, & Fact-Checking: Ohio University MPA.” Ohio University, 17 Oct. 2019.
  4. Barbaro, Michael, host. “Jack Dorsey on Twitter’s Mistakes.” The Daily, The New York Times, 7 Aug. 2020.
  5. Barbaro, Michael, host. “Jack Dorsey on Twitter’s Mistakes.” The Daily, The New York Times, 7 Aug. 2020.