Fact Checking

Fact-checking is the process of investigating a claim or piece of information in order to verify the facts. Fact-checking has become increasingly relevant to the discussion of the role major social media companies play as news sources. According to a 2018 survey, over two-thirds of Americans get some of their news from social media: 43% of Americans get news from Facebook, 21% from YouTube, and 12% from Twitter [1]. While fact-checking the information one is exposed to has always been part of the discussion (i.e. using reliable sources, researching the author(s), etc.), there is now debate over who is responsible for fact-checking information online. Is it the responsibility of the user to ensure their information comes from a reliable source, or the responsibility of the platforms that share this information (i.e. Facebook, Twitter, Google, etc.)? Fact-checking is important because exposure to misinformation can greatly influence people's opinions and, in turn, their actions.

There are different types of information on the internet that might be fact-checked. Misinformation is incorrect or misleading information [2]. Misinformation is unique in that it is spread by people who do not know the information is false. An example of misinformation is "fake news": many people share fake news without knowing it is fake, simply because it catches their attention and seems interesting. Disinformation, by contrast, is the spreading of false information by someone who knows it is false.

Responsibility of Platforms

Third parties such as major social media platforms and government officials face pressure to combat the spread of fake news and misinformation. Until the last few years, users have generally borne the responsibility of fact-checking their news and evaluating it for truth. Below is a cross-analysis of different third parties' policies regarding fact-checking.

Facebook is an online social media and social networking service founded in 2004 by Mark Zuckerberg, who remains its CEO. Facebook is part of the technology conglomerate Meta, which also owns Instagram and WhatsApp. The policies outlined below apply to all Meta-owned applications, but Facebook is at the center of the discussion of fact-checking information.

Twitter is a social networking and microblogging platform founded in 2006 by Jack Dorsey. Its CEO is currently Parag Agrawal.

YouTube is an online video-sharing and social media platform owned by Google. It was founded in 2005, and its CEO is Susan Wojcicki.

Fact-Check Policies

Misinformation Policies

Establishing a political misinformation policy means that platforms acknowledge disinformation when it appears and potentially enforce penalties against content or users that frequently violate their guidelines.

  • Facebook has partnered with a third-party fact-checking platform to implement a remove/reduce/inform policy. Ad content flagged as false is covered with a disclaimer and accompanied by a fact-check, and its reach (the total number of people who see the content) is reduced by Facebook's algorithms. Users who attempt to share or post fact-checked information are prompted with a notification informing them the information might not be correct. The scale of Meta's platforms is currently too large for the remove/reduce/inform policy to cover fully. Elected officials and certain Facebook-approved candidates are exempt from this policy and have been linked to some of the greatest sources of misinformation on the site. (A sketch of this flow appears after this list.)
  • Twitter has established a fact-checking policy that applies to information that could alter confidence in democratic elections, but it does not have a widespread fact-checking program. Twitter is currently working on launching a feature that asks users who try to retweet articles they have not opened and read whether they are sure they want to retweet them [3].
  • YouTube also has a fact-checking policy against information that could alter confidence in democratic elections, and it has introduced fact-check panels to debunk popular false claims (i.e. conspiracy theories). Overall, however, this policy has not been enforced.
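
None of these platforms publishes its enforcement code, but the mechanics described above can be summarized as a simple decision pipeline. The Python sketch below is a hypothetical illustration only: the Post record, the EXEMPT_AUTHORS set, the 0.2 reach multiplier, and the prompt wording are all assumptions made for the example, not Facebook's or Twitter's actual implementation.

 from dataclasses import dataclass
 
 @dataclass
 class Post:
     # Illustrative content record; field names are assumptions, not a real API.
     author: str
     text: str
     flagged_false: bool = False      # set when a third-party fact-checker disputes the post
     disclaimer: str | None = None    # fact-check notice shown over the content
     reach_multiplier: float = 1.0    # fraction of normal feed distribution
 
 # Hypothetical exemption list standing in for elected officials and
 # Facebook-approved candidates, who are exempt from the policy.
 EXEMPT_AUTHORS = {"elected_official_example"}
 
 def remove_reduce_inform(post: Post) -> Post:
     # Sketch of the remove/reduce/inform flow for a fact-checked post.
     if post.flagged_false and post.author not in EXEMPT_AUTHORS:
         # Inform: cover the content with a disclaimer and a fact-check.
         post.disclaimer = "Independent fact-checkers dispute this post."
         # Reduce: shrink the post's reach; 0.2 is an arbitrary example value.
         post.reach_multiplier = 0.2
     return post
 
 def share_prompt(post: Post, opened_link: bool) -> str | None:
     # Inform on share: warn users sharing disputed content, and (as in
     # Twitter's planned retweet feature) ask users who have not opened
     # an article whether they are sure they want to share it.
     if post.flagged_false:
         return "Fact-checkers say this may be inaccurate. Share anyway?"
     if not opened_link:
         return "You haven't opened this article. Retweet it anyway?"
     return None  # no prompt; the share proceeds normally
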

References

1. Comparative Social Media Policy Analysis. Democrats, 27 Aug. 2021.
2. "Fake News, Misinformation, & Fact-Checking: Ohio University MPA." Ohio University, 17 Oct. 2019.
3. Barbaro, Michael, host. "Jack Dorsey on Twitter's Mistakes." The Daily, The New York Times, 7 Aug. 2020.