Fact Checking

Fact checking is the process of investigating an issue or piece of information in order to verify the facts. Fact-checking has become increasingly relevant in discussions of the role that major social media companies play as news sources. According to a 2018 survey, over two-thirds of Americans get some of their news from social media: 43% of Americans get news from Facebook, 21% from YouTube, and 12% from Twitter [1]. While fact-checking the information you are exposed to has always been part of the discussion (i.e. using reliable sources, researching the author(s), etc.), there is now debate over who is responsible for fact-checking information online. Is it the responsibility of the user to ensure their information comes from a reliable source, or is it the responsibility of the platforms that share this information (i.e. Facebook, Twitter, Google, etc.)? Fact-checking is important because exposure to misinformation can greatly influence people's opinions and, in turn, their actions.

There are different types of information on the internet that might be fact-checked. Misinformation is incorrect or misleading information[2]. Misinformation is unique in that it is spread by people who do not know the information is false. An example of misinformation is "fake news": many people share fake news without knowing it is fake, because it catches their attention and is interesting. Disinformation, by contrast, is the spreading of false information with the knowledge that it is false[3].

History

Rise of Fact-Checking

The rise of the internet in the early 2000s allowed people to get their information from a wide variety of sources. People no longer had to rely on thoroughly edited newspapers or radio broadcasts for their news. The growth of online platforms created many concerns, such as the concentration of power in the hands of a few large media companies and the growing likelihood of users getting trapped in echo chambers or filter bubbles. An echo chamber or filter bubble occurs when people are isolated from information opposing their own perspectives and viewpoints. People tend to look for information that confirms their current beliefs, not information that challenges them. This means that either people need to fact-check the information they are exposed to for accuracy, or the distributors of that information (i.e. social media platforms) are responsible for monitoring the accuracy of information posted on their sites and notifying users when it might not be correct. Fact-checking becomes an especially prevalent topic of conversation around elections and political events such as the 2016 and 2020 US Presidential elections and the January 6, 2021 Capitol insurrection.

2016 US Presidential Election

The term "fake news" became exponentially popular during the 2016 US Presidential Election. After the election, and Donald Trump's victory, there was much debate about the impact social media as news platforms and the spreading of "fake news" and misinformation had on the results of the election. Research following the election concluded that fake new stories were shared more frequently than mainstream new stories, many people who are exposed to fake news report their believed it, and that most fake news stories favored Donald Trump over his competitor Hillary Clinton[4]. These findings drove leaders in tech, government officials, and users of these platforms to look to create a solution before the 2020 Presidential election.

Responsibility of Platforms

Third parties such as major social media platforms and government officials face pressure to combat the spread of fake news and misinformation. The United States aims both to protect the First Amendment right to free speech and to ensure its people have access to accurate information. Until the last couple of years, users were generally left with the responsibility of fact-checking their news and evaluating it for truth. Below is a cross-analysis of different third parties' policies regarding fact-checking.

Facebook is an online social media and social networking service founded in 2004 by Mark Zuckerberg, who remains its CEO. Facebook is part of the technology conglomerate Meta, which also includes Instagram and WhatsApp. The policies outlined below apply to all Meta-owned applications, but Facebook is at the center of the discussion of fact-checking information.

Twitter is a social networking and microblogging platform founded in 2006 by Jack Dorsey. Its current CEO is Parag Agrawal. 80% of Twitter's user base lies outside the U.S. [5]

YouTube is an online video sharing and social media platform owned by Google. It was founded in 2005. Its CEO is Susan Wojcicki.

Fact-Check Policies

Misinformation Policies

Establishing a political misinformation policy means that platforms acknowledge disinformation when it appears and potentially enforce penalties against content or users that frequently violate their guidelines.

  • In 2016, Facebook partnered with third-party fact-checking organizations to implement a remove/reduce/inform policy. Content that is flagged as false is covered with a disclaimer and accompanied by a fact-check. Its reach (the total number of people who see the content) is also reduced by Facebook's algorithms. Users who attempt to share or post fact-checked information are prompted with a notification informing them that the information might not be correct (see the sketch after this list). The scale of Meta's platforms is currently too large for the remove/reduce/inform policy to be enforced consistently. Elected officials and certain Facebook-approved candidates are exempt from this policy and have been linked to some of the greatest sources of misinformation on the site.

[Image: trust indicator]

  • Twitter has established a fact-checking policy that applies to information that could undermine confidence in democratic elections, but it does not have a widespread fact-checking program. Twitter is currently working on launching a program that asks users who try to retweet articles they have not opened whether they are sure they want to retweet them [6].
  • YouTube also has a fact-checking policy against information that could undermine confidence in democratic elections. It has also introduced fact-check panels to debunk popular false claims (i.e. conspiracy theories). Overall, however, this policy has not been enforced.
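
To make the remove/reduce/inform approach concrete, here is a minimal Python sketch of how such a pipeline might work in principle. It is not Facebook's actual implementation; the verdict labels, reach multipliers, and function names are all invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    verdict: str = "unrated"        # set by hypothetical third-party fact-checkers
    reach_multiplier: float = 1.0   # scales how widely the post is distributed
    disclaimer: Optional[str] = None

def apply_remove_reduce_inform(post: Post) -> Post:
    # Content violating hard rules would be removed outright; this sketch
    # covers the reduce and inform steps for fact-checked posts.
    if post.verdict == "false":
        post.disclaimer = "Independent fact-checkers rated this as false."
        post.reach_multiplier = 0.2   # invented value: sharply reduce reach
    elif post.verdict == "partly_false":
        post.disclaimer = "Independent fact-checkers added context to this post."
        post.reach_multiplier = 0.5   # invented value: moderately reduce reach
    return post

def on_share_attempt(post: Post) -> str:
    # Inform step: users re-sharing flagged content see a warning first.
    if post.disclaimer is not None:
        return f"Before you share: {post.disclaimer}"
    return "shared"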

Hate Speech

Enforcing guidelines on hate speech and removing content intended to intimidate or dehumanize individuals or groups reduces real-world violence. All major social media companies have policies against hate speech, but enforcement is inconsistent across platforms, and algorithms generally do not treat minority groups the same as those in the majority.

  • After the controversies of the 2016 election, Facebook implemented and improved its hate speech policies. Removing white nationalism, militarized social movements, and Holocaust denial from the platform has been a major goal for Facebook. However, as shown by the January 6, 2021 Capitol insurrection, which was largely planned on Facebook, these policies have not been well enforced.
  • Twitter adapted its hate speech policy to include tweets that target individuals or groups based on age, disability, disease, race, or ethnicity. However, as with Facebook, these policies are not well enforced.
  • YouTube removed channels run by hate group leaders, but its recommendation algorithm still suggests racist and hateful videos. Studies have shown that YouTube facilitates right-wing radicalization by encouraging users to "go down a rabbit hole" (it is easy to move from mainstream to extreme content in one sitting) with its recommendation algorithms.

Anonymous Accounts

Reducing anonymous accounts on platforms reduces the ability of foreign or other bad actors to spread false information. This can come in the form of requiring users to confirm their identity.

  • Facebook requires accounts to be authentic and flags accounts it detects as fake.
  • Twitter does not require accounts to represent humans. In 2014, Twitter revealed that 23 million of its accounts were "bots," and nearly 45% of those were active Russian accounts[7].
  • YouTube does not require accounts to be authentic. Google, however, considers anonymous accounts to be low quality, and they receive a low PageRank.

Recommendation Algorithms

Content recommendation algorithms are used to increase screen time. Making these algorithms transparent shifts some of the responsibility onto users; a simplified sketch of engagement-based ranking follows the list below.

  • Facebook has provided very little transparency into the algorithms behind its News Feed.
  • Twitter has provided very little transparency into the algorithms behind its top search results, ranked tweets, and "who to follow" suggestions.
  • Google's search ranking algorithm has been extensively documented. YouTube, however, has provided very little transparency into its recommendation algorithms.
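
As a rough illustration of what these largely opaque systems optimize for, the short Python sketch below ranks a feed purely by predicted engagement. The signal names and weights are invented, not any platform's real system; the point is that nothing in such a ranking accounts for accuracy.

# Hypothetical engagement-based feed ranking; fields and weights are invented.
def engagement_score(item: dict) -> float:
    return (2.0 * item["predicted_shares"]
            + 1.0 * item["predicted_comments"]
            + 0.5 * item["predicted_likes"])

def rank_feed(items: list) -> list:
    # Sort by predicted engagement alone; accuracy plays no role.
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    {"title": "Measured policy explainer",
     "predicted_shares": 3, "predicted_comments": 5, "predicted_likes": 40},
    {"title": "Sensational (false) claim",
     "predicted_shares": 90, "predicted_comments": 120, "predicted_likes": 200},
])
print([item["title"] for item in feed])  # the sensational item ranks first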

Hacked Content

All platforms have strict policies banning the publication of hacked materials, which are considered stolen, forged, or manipulated documents.

Conclusion

References

  1. Comparative Social Media Policy Analysis. Democrats, 27 Aug. 2021.
  2. "Fake News, Misinformation, & Fact-Checking: Ohio University MPA." Ohio University, 17 Oct. 2019.
  3. "Fake News, Misinformation, & Fact-Checking: Ohio University MPA." Ohio University, 17 Oct. 2019.
  4. Allcott, Hunt, and Matthew Gentzkow. "Social Media and Fake News in the 2016 Election." Journal of Economic Perspectives, 2017. https://web.stanford.edu/~gentzkow/research/fakenews.pdf
  5. Barbaro, Michael, host. "Jack Dorsey on Twitter's Mistakes." The Daily, The New York Times, 7 Aug. 2020.
  6. Barbaro, Michael, host. "Jack Dorsey on Twitter's Mistakes." The Daily, The New York Times, 7 Aug. 2020.
  7. "Real Reason Fake News Spreads So Fast on Facebook and Twitter." News.com.au. https://www.news.com.au/technology/online/social/real-reason-fake-news-spreads-so-fast-on-facebook-and-twitter/news-story/2e2653d2315d4dd6e78e42ec9e9155be
Summary of platform policies:

Policy                    | Facebook/Meta | Twitter | YouTube/Google
Misinformation Policy     | Yes           | Yes     | No
Hate Speech               | Yes           | Yes     | Yes
Anonymous Accounts        | Yes           | No      | No
Recommendation Algorithms |               |         |
Hacked Content            | Yes           | Yes     | Yes