Fact Checking


Fact checking is the process of investigating a claim or piece of information in order to verify the facts. The process has become increasingly relevant in discussions of the role that major social media companies play as news sources. According to a 2018 survey, over two-thirds of Americans get some of their news from social media: 43% from Facebook, 21% from YouTube, and 12% from Twitter [1]. While fact-checking the information one is exposed to has always been a relevant topic (e.g. using reliable sources, researching the author(s), etc.), there is now debate over which parties are responsible for fact-checking information online. Responsibility could fall on users to ensure their information comes from reliable sources, or on the platforms that distribute that information (e.g. Facebook, Twitter, Google). Fact-checking is important because exposure to misinformation can greatly influence people's opinions and, in turn, their actions.

There are different types of information on the internet that might be fact-checked. Misinformation is incorrect or misleading information[2]. Misinformation is distinct in that it is spread by people who do not necessarily know it is false. An example of misinformation is "fake news," a term used to describe any type of false information, whether spread intentionally or unintentionally. "Fake news" often spreads quickly and widely because it catches users' attention and is interesting. Disinformation, by contrast, is false information spread by someone who knows it is false[3]. Social media companies must weigh the harms and benefits of leaving misinformation up unchecked, as well as the ethical implications of intervening.


History

Rise of Fact-Checking

The rise of the internet in the early 2000s allowed people to get their information from a wide variety of sources. People no longer had to rely on thoroughly edited newspapers or radio broadcasts for their news. The growth of online platforms created ethical concerns such as the concentration of power in the hands of a few large media companies and the growing likelihood of users getting trapped in echo chambers or filter bubbles. An echo chamber or filter bubble occurs when people are isolated from information opposing their own perspectives and viewpoints; people tend to seek information that confirms their current beliefs, not information that challenges them. This means that either users must fact-check the information they are exposed to, or the distributors of that information (i.e. social media platforms) must monitor the accuracy of what is posted on their sites or notify users that it might not be correct. Documents that Facebook released following the January 6th Capitol insurrection showed that misinformation shared by politicians is more damaging than information coming from ordinary users[4]. As a result, fact-checking has become an especially prevalent topic of conversation and controversy surrounding elections and political events, such as the 2016 and 2020 US Presidential elections and the January 6th, 2021 Capitol insurrection, as well as public health issues.

2016 US Presidential Election

The term "fake news" became exponentially popular during the 2016 US Presidential Election due to the frequent use of it by President Donald Trump as well as the media. After the election, and Donald Trump's victory, controversy arose surrounding the impact of social media as a news platform. All parties involved began investigating the effect that "fake news" and misinformation had on the results of the election. Research done for the Journal of Economic Perspectives in 2017, following the 2016 election, concluded that "fake news" stories were shared more frequently than mainstream new stories, many people who were exposed to fake news reported they believed it, and that most "fake news" stories favored Donald Trump over his competitor Hillary Clinton[5]. This drove leaders in tech, government officials, and users of these platforms to look to create a solution before the 2020 Presidential election. Additionally, over seas, Russian internet trolls worked frequently through Facebook's platform to create accounts and pages that looked as if they were American's posting to sway public favor towards Trump[6].

January 6th Capitol Insurrection

[Image: Example of a tweet from January 6th with a fact-check notice]

On January 6th, 2021, the US Congress assembled to confirm President Joe Biden's victory over former President Donald Trump. That day, nearly 2,000 right-wing rioters breached police barriers and the Capitol's walls, violently storming and defacing much of the building. After the event, officials investigated the factors that encouraged the riot and ultimately allowed it to happen. At the forefront of the many factors identified were Donald Trump's numerous false tweets claiming that voter fraud had cost him the election and that he was the rightfully elected President. These tweets motivated extremists to take matters into their own hands. After the insurrection, Facebook acknowledged that it had not acted forcefully enough against the Stop the Steal movement. User reports of "fake news" reached about 40,000 posts per hour, and the account reported most often for inciting violence was @realdonaldtrump, the President's. Facebook also maintained that the blame lay with the people involved in the attack on the Capitol and those who encouraged it, not necessarily with Facebook and other media platforms. After January 6th, Facebook removed content with the phrase 'stop the steal' under its Coordinating Harm policy and suspended Trump from its platforms. The policy states: "[We] prohibit people from facilitating, organizing, promoting or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals. We allow people to debate and advocate for the legality of criminal and harmful activities, as well as draw attention to harmful or criminal activity that they may witness or experience as long as they do not advocate for or coordinate harm." Mark Zuckerberg, Facebook's CEO, also said after the event that while Facebook had a role in preventing the spread of information that incited violence, many possible mitigation tactics would produce too many "false positives" and stop people from engaging with the platform[7].

COVID-19 Pandemic

The COVID-19 pandemic, as a major public health crisis, has been very prominent in the news and on social media. It has been a target of misinformation partly because official advice regarding COVID-19 is constantly changing as officials learn more about the disease. Instances of disinformation, such as blaming racial groups, and of misinformation, such as false treatment remedies, have been spreading over the internet since the pandemic's start in 2019[8]. Facebook has publicly committed to connecting people with reliable COVID-19 information and to limiting the spread of COVID-19 hoaxes and misinformation. Facebook launched the COVID-19 Information Center, which is featured at the top of the Facebook News Feed and shows educational pop-ups with information from the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC). In terms of removal, Facebook has committed to taking down COVID-19 misinformation that could contribute to imminent physical harm, such as claims that physical distancing does not help prevent the spread of COVID-19[9]. For claims that do not contribute to physical harm, such as conspiracy theories, if the claims are proven false by Facebook's fact-checkers, the distribution of the post is reduced and it is accompanied by strong warning labels for those who come across it.

Responsibility of Platforms

Third parties such as major social media platforms and government officials face pressure to combat the spread of fake news and misinformation. The United States aims both to protect First Amendment rights to free speech and to ensure its people have access to accurate information. Until recently, users were generally left with the responsibility to fact-check their news and evaluate it for truth. Events such as the 2016 Presidential election have shown that fact-checking can be too difficult for users to do on their own because of trolls, bots, and the sheer volume of misinformation. There are ethical implications both in allowing the spread of misinformation and in infringing on people's right to free speech. Below is a cross-platform analysis of major third parties' policies regarding fact-checking.

Platforms

Facebook is an online social media and social networking service founded in 2004 by Mark Zuckerberg, who recently announced the company's transition to its new name, 'Meta'. Meta is a technology conglomerate that includes Facebook, Instagram, and WhatsApp. The policies outlined below apply to all Meta-owned applications, but Facebook is at the center of the discussion of fact-checking information.

Twitter is a social networking and microblogging platform founded in 2006 by Jack Dorsey. Its CEO is currently Parag Agrawal. 80% of Twitter's user base lies outside the U.S.[10]

YouTube is an online video sharing and social media platform owned by Google. It was founded in 2005. Its CEO is Susan Wojcicki.

Fact-Check Policies

Below is a summary of different platforms' attitudes and policies toward fact-checking:[11]

  • Facebook/Meta — Policy: Facebook generally identifies misinformation after it spreads and a fact-check is requested. It refers posts to third-party fact-checkers before applying a warning label, and punishments are threatened for repeat offenders. Example: October 6th, 2020: Facebook takes down a Trump post comparing COVID-19 to the flu.
  • Twitter — Policy: Twitter has no comprehensive misinformation policy. It will label or remove manipulated media, or anything intended to misinform or interfere with elections or other civic processes, and it categorizes tweets as misleading, disputed, or unverified before fact-checking. Exceptions have been made for key elected officials. Example: May 27th, 2020: Twitter fact-checks Trump's tweet for the first time.[12]
  • YouTube/Google — Policy: YouTube provides information from third-party fact-checkers on specific searches in some countries to give context that helps users make their own informed decisions about the videos they watch; specific videos are not fact-checked[13]. Example: April 28th, 2020: YouTube begins fact-checking videos in the United States.[14]

Misinformation Policies

Establishing a political misinformation policy means that a platform acknowledges disinformation when it appears and potentially enforces penalties against content and users that frequently violate its guidelines.

[Image: Example of Facebook's fact-checking policy notification]
  • In 2016 Facebook partnered with a third-party fact-checking platform to implement a remove/reduce/inform policy (a minimal sketch of how such a pipeline might work follows this list). Under this policy, content flagged as false is covered with a disclaimer and accompanied by a fact-check, and the post's reach (the total number of people who see the content) is reduced by Facebook's algorithms. Users who attempt to share or post fact-checked information are shown a notification informing them that the information might not be correct. The scale of Meta's platforms is currently too large for the remove/reduce/inform policy to be applied consistently. Elected officials and certain Facebook-approved candidates are exempt from this policy and have been linked to some of the greatest sources of misinformation on the site. Overall, Facebook has taken a strong stance that social media should not fact-check political speech[15].
  • Twitter has established a fact-checking policy that applies to information that could undermine confidence in democratic elections, but it does not have a widespread fact-checking program. In May 2020, Twitter fact-checked a Trump tweet for the first time, on mail-in voting, and followed by putting labels on certain tweets of Trump's that violated company policies [16]. Twitter is also working on a feature that asks users who try to retweet articles they have not opened and read whether they are sure they want to retweet them [17].
  • YouTube also has a fact-checking policy against information that could undermine confidence in democratic elections, and it has introduced fact-check panels to debunk popular false claims (i.e. conspiracy theories). Overall, this policy has not been enforced.
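To make the remove/reduce/inform pattern described above concrete, here is a minimal, hypothetical sketch in Python of how such a moderation pipeline might route a single post. All names and the demotion factor are illustrative assumptions, not Facebook's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Verdict(Enum):
    FALSE = "false"            # debunked by third-party fact-checkers
    DISPUTED = "disputed"      # conflicting assessments
    UNVERIFIED = "unverified"  # not yet reviewed

@dataclass
class Post:
    text: str
    verdict: Verdict = Verdict.UNVERIFIED
    incites_harm: bool = False
    reach_multiplier: float = 1.0        # scales how widely the feed distributes the post
    warning_label: Optional[str] = None
    removed: bool = False

def moderate(post: Post) -> Post:
    """Hypothetical remove/reduce/inform routing for one post."""
    # Remove: content that could contribute to imminent physical harm.
    if post.incites_harm:
        post.removed = True
        return post
    # Reduce + inform: fact-checked false content stays up,
    # but is demoted and covered with a disclaimer.
    if post.verdict is Verdict.FALSE:
        post.reach_multiplier = 0.2      # illustrative demotion factor
        post.warning_label = "Independent fact-checkers say this information is false."
    elif post.verdict is Verdict.DISPUTED:
        post.warning_label = "This claim is disputed."
    return post

post = moderate(Post(text="Example claim", verdict=Verdict.FALSE))
print(post.reach_multiplier, post.warning_label)  # demoted, with a warning label attached
```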

Hate Speech

Enforcing guidelines on hate speech and removing content intended to intimidate or dehumanize individuals or groups aims to reduce real-world violence. All major social media companies have policies against hate speech, but enforcement is inconsistent across platforms, and moderation algorithms generally do not treat minority groups the same as those in the majority.

  • Since the 2016 election, Facebook has implemented and improved hate speech policies; removing white nationalism, militarized social movements, and Holocaust denial from the platform has been a major goal. However, as the January 6, 2021 Capitol insurrection, which was largely planned on Facebook, showed, these policies have not been fully enforced.
  • Twitter adapted its hate speech policy to include tweets that target individuals or groups based on age, disability, disease, race, or ethnicity. As with Facebook, these policies are not fully enforced.
  • YouTube removed channels run by hate group leaders, but its recommendation algorithms still suggest videos containing hate speech and racist content. Studies have shown that YouTube facilitates right-wing radicalization by encouraging users to "go down a rabbit hole": its recommendation algorithms make it easy to move from mainstream to extreme content in a single sitting.[18]

Anonymous Accounts

Reducing anonymous accounts on platforms reduces the ability of foreign and other bad actors to manipulate discourse. Measures can include requiring users to confirm their identity. The large number of Russian troll accounts during the 2016 election is an example of why companies have begun forming policies around anonymous accounts.

  • Facebook requires accounts to be authentic and flags accounts it detects as fake.
  • Twitter does not require accounts to represent humans. In 2014, Twitter revealed that 23 million of its accounts were "bots," and nearly 45% of those were active Russian accounts[19].
  • YouTube does not require accounts to be authentic. Google, however, considers anonymous accounts low quality, and they receive a low PageRank. PageRank is the algorithm Google uses to rank web pages in its search engine; a minimal sketch follows.
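Since PageRank comes up above, here is a minimal power-iteration sketch of the classic published PageRank algorithm in Python. This illustrates the original algorithm only; Google's production ranking uses many additional signals.

```python
import numpy as np

def pagerank(adjacency: np.ndarray, damping: float = 0.85,
             tol: float = 1e-8, max_iter: int = 100) -> np.ndarray:
    """Classic PageRank computed by power iteration.

    adjacency[i, j] = 1 if page i links to page j.
    """
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages (no out-links)
    # are treated as linking to every page equally.
    M = np.where(out_degree[:, None] > 0,
                 adjacency / np.maximum(out_degree[:, None], 1.0),
                 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Tiny three-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
print(pagerank(links))  # page 2 earns the highest rank, page 1 the lowest
```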

Recommendation Algorithms

Content recommendation algorithms are designed to increase screen time, which translates directly into profit for social media companies. Generally, social media companies have not made these algorithms transparent to users (a simplified sketch of engagement-based ranking follows the list below).

  • Facebook has provided very little transparency into the algorithms behind its News Feed.
  • Twitter has provided very little transparency into the algorithms behind its top search results, ranked tweets, and "who to follow" suggestions.
  • Google's ranking algorithm has been extensively documented; YouTube, however, has provided very little transparency into its recommendation algorithms.
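To illustrate what engagement-driven ranking can look like, here is a deliberately simplified, hypothetical scoring function in Python. The features and weights are invented for illustration; real feed-ranking systems are vastly more complex and proprietary.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_seconds: float  # model estimate of time spent on the item
    predicted_click_prob: float     # estimated probability of a click
    predicted_share_prob: float     # estimated probability of a share

def engagement_score(c: Candidate) -> float:
    # Invented weights: every term rewards expected attention, so content
    # that provokes strong reactions tends to rank highest, regardless of accuracy.
    return (0.6 * c.predicted_watch_seconds
            + 20.0 * c.predicted_click_prob
            + 40.0 * c.predicted_share_prob)

candidates = [
    Candidate("calm explainer", predicted_watch_seconds=45,
              predicted_click_prob=0.10, predicted_share_prob=0.02),
    Candidate("outrage bait", predicted_watch_seconds=30,
              predicted_click_prob=0.35, predicted_share_prob=0.20),
]
feed = sorted(candidates, key=engagement_score, reverse=True)
print([c.title for c in feed])  # the more provocative item ranks first
```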

Hacked Content

All platforms have very strict policies banning the publication of hacked materials, meaning stolen, forged, or manipulated documents.

Ethical Concerns of Fact-Checking

There are many ethical concerns regarding the implementation of fact-checking, but there are also ethical concerns in allowing the spread of misinformation and disinformation on social media platforms.

1. Many claims do not have a simple true or false answer. Information that is flagged for fact-checking can be complicated and nuanced, not yielding a simple verdict.

  • People often see different things when looking at the same information. Opinions can be shaped by political affiliation, religious beliefs, and life experiences, among many other factors, so the assessed validity of flagged information could be influenced by the background of the person or team doing the fact-checking.
  • For instance, a political claim fact-checked by a partisan team might yield a different result than one fact-checked by a bipartisan team.
  • People can also do research of their own to determine the validity of claims they see online.

2. Fact-checking can infringe on social media's role as a platform that gives people a voice and free expression.

  • In an interview with CNBC in 2017, Zuckerberg stated, "But overall, including compared to some of the other companies, we try to be more on the side of giving people a voice and free expression."[20]
  • Social media companies are not the “arbiter of truth”[21]
  • Many decisions by social media companies to fact-check and demote pieces of misinformation have drawn more attention to that content[22]

3. Misinformation can incite violence, cause personal harm, and lead to voter suppression

  • Feiner v. People of State of New York set the precedent that speech that incites a riot and creates a clear and present danger of a disturbance of the peace is not protected by the First Amendment[23].
  • Zuckerberg is firm in his commitment that "no one is allowed to use Facebook to cause violence or harm to themselves or to post misinformation that could lead to voter suppression."[24]
  • Twitter's Jack Dorsey stated in an interview that "Twitter had a responsibility to intervene — specifically with regard to language that encourages violence or voter suppression or that challenges electoral integrity." [25]
  • Social media platforms are private companies, not public forums, so the First Amendment does not fully constrain their moderation decisions. This means that those who post on social media do not have an unchecked right to free speech on those platforms.
  • Langdon v. Google, Inc. created the precedent that if an advertisement is shown to be misleading or unlawful, a restriction on that speech is permissible but not mandatory[26]

4. Social media companies need to maintain neutrality[27]

  • If platforms removed articles simply because they disagreed with the claims made, they would lose a large part of their user base.
  • Many major platforms are popular precisely because they have a diverse user base.
  • Social media companies are businesses; they aim to maximize profits by attracting a large set of users.
  • Many users come to social media to have discourse with users who hold opposing opinions.

5. Users are responsible for fact-checking their information

  • Users should use their own research methods to determine the validity of claims they see online.
  • It is the platform's responsibility to provide a place for the information, but it is up to the user to decide what to do with that information.


Conclusion

There are both positive and negative consequences to technology platforms creating and enforcing fact-checking policies versus relying on users to fact-check their own information, and this range of consequences is what makes fact-checking an ethical debate. The positive consequence of platforms such as Facebook, Twitter, and YouTube implementing fact-check policies is that users are less likely to be exposed to misinformation and "fake news". This can prevent users from getting trapped in echo chambers and filter bubbles, where they can easily develop extremist views that may become dangerous to both themselves and those around them. The adage "don't believe everything you see on the internet" is now truer than ever because of the sheer amount of information online, which also makes it extremely difficult for users to discern what is true and what is not. The negative consequence of major platforms implementing fact-check policies is that users might feel their rights are being infringed upon. Freedom of speech and freedom of the press are values that the United States holds dear and that are crucial to a free country. People go on social media to be exposed to opposing views and diverse perspectives, and fact-checking reduces that to an extent.

References

  1. Comparative Social Media Policy Analysis. Democrats, 27 Aug. 2021. Retrieved January 26, 2022
  2. “Fake News, Misinformation, & Fact-Checking: Ohio University MPA”. Ohio University, 17 Oct. 2019. Retrieved January 26, 2022
  3. “Fake News, Misinformation, & Fact-Checking: Ohio University MPA”. Ohio University, 17 Oct. 2019. Retrieved January 26, 2022
  4. Timberg, Craig, et al. “Inside Facebook, Jan. 6 Violence Fueled Anger, Regret over Missed Warning Signs”. The Washington Post, WP Company, 29 Oct. 2021. Retrieved January 26, 2022
  5. Allcott, Hunt, et al. “Social Media and Fake News in the 2016 Election”. Journal of Economic Perspectives, 2017. Retrieved January 26, 2022
  6. O'Sullivan, Donie, et al. “Not Stopping 'Stop the Steal:' Facebook Papers Paint Damning Picture of Company's Role in Insurrection”. CNN, Cable News Network, 24 Oct. 2021. Retrieved January 26, 2022
  7. Timberg, Craig, et al. “Inside Facebook, Jan. 6 Violence Fueled Anger, Regret over Missed Warning Signs”. The Washington Post, WP Company, 29 Oct. 2021. Retrieved January 26, 2022
  8. Nyilasy, Greg. “Fake News in the Age of COVID-19”. University of Melbourne. Retrieved February 11, 2022
  9. Clegg, Nick. “Combating COVID-19 Misinformation Information Across our Apps”. Meta, 25 March 2020. Retrieved February 11, 2022
  10. Barbaro, Michael, host. “Jack Dorsey on Twitter’s Mistakes”. The Daily, NYT, 7 Aug. 2020. Retrieved January 26, 2022
  11. Comparative Social Media Policy Analysis. Democrats, 27 Aug. 2021. Retrieved January 26, 2022
  12. Rich, Timothy S., et al. “Research Note: Does the Public Support Fact-Checking Social Media? It Depends Whom and How You Ask: HKS Misinformation Review”. Misinformation Review, 6 Feb. 2021. Retrieved January 26, 2022
  13. “See Fact Checks in YouTube Search Results - YouTube Help”. Google. Retrieved January 27, 2022
  14. Rich, Timothy S., et al. “Research Note: Does the Public Support Fact-Checking Social Media? It Depends Whom and How You Ask: HKS Misinformation Review”. Misinformation Review, 6 Feb. 2021. Retrieved January 26, 2022
  15. Rich, Timothy S., et al. “Research Note: Does the Public Support Fact-Checking Social Media? It Depends Whom and How You Ask: HKS Misinformation Review”. Misinformation Review, 6 Feb. 2021. Retrieved January 26, 2022
  16. Rich, Timothy S., et al. “Research Note: Does the Public Support Fact-Checking Social Media? It Depends Whom and How You Ask: HKS Misinformation Review”. Misinformation Review, 6 Feb. 2021. Retrieved January 26, 2022
  17. Barbaro, Michael, host. “Jack Dorsey on Twitter’s Mistakes”. The Daily, NYT, 7 Aug. 2020. Retrieved January 26, 2022
  18. Rich, Timothy S., et al. “Research Note: Does the Public Support Fact-Checking Social Media? It Depends Whom and How You Ask: HKS Misinformation Review”. Misinformation Review, 6 Feb. 2021. Retrieved January 26, 2022
  19. Dudley-Nicholson, Jennifer. “The Real Reasons Fake News Spreads on Social Media”. News, News.com.au - Australia's Leading News Site, 26 June 2017. Retrieved January 27, 2022
  20. Rodriguez, Salvador. “Mark Zuckerberg says social networks should not be fact-checking political speech”. CNBC, May 28, 2021. Retrieved February 11, 2022
  21. Rodriguez, Salvador. “Mark Zuckerberg says social networks should not be fact-checking political speech”. CNBC, May 28, 2021. Retrieved February 11, 2022
  22. Davis, Courtney. “The Ethics of Handling the New York Post ‘October Surprise’”. Markkula Center for Applied Ethics at Santa Clara University, February 5, 2021. Retrieved February 11, 2022
  23. Pinkus, Brett. “The Limits of Free Speech in Social Media”. UNT Dallas College of Law, April 26, 2021. Retrieved February 11, 2022
  24. Rodriguez, Salvador. “Mark Zuckerberg says social networks should not be fact-checking political speech”. CNBC, May 28, 2021. Retrieved February 11, 2022
  25. Barbaro, Michael, host. “Jack Dorsey on Twitter’s Mistakes”. The Daily, NYT, 7 Aug. 2020. Retrieved January 26, 2022
  26. Pinkus, Brett. “The Limits of Free Speech in Social Media”. UNT Dallas College of Law, April 26, 2021. Retrieved February 11, 2022
  27. Davis, Courtney. “The Ethics of Handling the New York Post ‘October Surprise’”. Markkula Center for Applied Ethics at Santa Clara University, February 5, 2021. Retrieved February 11, 2022