Misinformation in Digital Media

From SI410
News containing misinformation displayed on a phone.[1]

Misinformation in digital media is a subset of misinformation, which is false or misleading information.[2] Instances of misinformation have been recorded throughout history, dating back as far as written records exist.[3] The advancement of technology in modern times has made digital media the primary source of information for most people.[4] At the same time, it created an avenue for misinformation to spread more quickly and to more people. Digital media takes a variety of forms, each of which is susceptible to producing misinformation in unique ways. Misinformation can affect all aspects of life, with heavy influence on society, politics, health, and industry.[5] The resulting decline in the overall accuracy of information produces negative consequences. Countering misinformation is a complicated task, since media platforms must strike a balance between upholding free speech and preventing misinformation.[6] Users and communities, on the other hand, have much greater power to make conscious choices about the information they consume.[7] The development of technology targeting misinformation also contributes to the process.

History

Pre-Internet Era

Early examples of misinformation date back to 15th-century Europe, where political rivals attempted to smear each other's reputations through various writings.[2] The first recorded instance of large-scale misinformation was the Great Moon Hoax, a series of six articles describing life on the Moon that The Sun published in 1835.[8] Before the internet age, misinformation was generally distributed through traditional media such as newspapers, television, and radio.[9] Traditional media often faced censorship and manipulation by governments and other powerful organizations.[10] This resulted in the prevalent spread of misinformation serving the interests of those in power, suppressing public access to accurate information. Additionally, information in traditional media could become distorted over long distances due to limitations in the range of communication technologies.[11] Thus, a particular piece of news might be reported one way in one region and differently in another. Overall, media in the pre-internet era was characterized by limited access to information and a higher degree of control over information by those in power.

Internet Age

The advancement of technology in the internet age has significantly changed the manner in which misinformation spreads. The broad influence of digital media and its associated technologies enables potential misinformation to spread rapidly.[12] Anyone with a digital device can now publish information to a large audience over the internet. The ease and speed with which information spreads online have greatly increased the volume of misinformation in the information flow.[5] During the 2016 United States presidential election, misinformation making up only 6% of overall news media reached about 40% of Americans.[13] This disproportionate reach illustrates the outsized influence of modern misinformation, which will become increasingly problematic as its volume grows. As technology continues to improve, the channels through which misinformation is distributed also expand. News media channels and websites have given way to social media, which prioritizes engagement over accuracy and further amplifies the audience.[14] In the sophisticated societal structure of modern times, the impact of misinformation thus multiplies significantly. In light of this, technology companies have taken steps to counter the spread of misinformation on their platforms.[15]

Sources of Misinformation

News Websites

In the past, the media industry provided consumers with a limited number of offerings that were consistent in nature.[16] In contrast, consumers today have access to an abundance of news offerings targeting different groups of people.[16] As a result, consumers often choose news sources that conform to their inherent biases.[17] Additionally, news publishers such as The Wall Street Journal and The Atlantic have developed algorithms that personalize consumers' news feeds based on stored data.[18] Such algorithms build upon consumers' tendency to seek out biased news sources by simplifying the process for them, rendering them more susceptible to misinformation.[19] Thus, the competition for customers among digital media companies has inadvertently created an environment in which misinformation can thrive. While media companies claim to be unbiased, that does not always hold true. Factors such as Fox News' pro-Republican tendencies lead to content that contains varying degrees of misinformation.[20]
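The feedback loop described above can be sketched as a toy scoring function. This is an illustration only, not any publisher's actual system; the function name, topics, and articles are all hypothetical:

```python
from collections import Counter

def personalize(history_topics, articles, k=2):
    """Rank articles by overlap with a user's reading history.

    history_topics: topics of articles the user has already read.
    articles: list of (title, topics) pairs available to recommend.
    Returns the top-k titles, showing how personalization keeps
    surfacing content similar to what was already consumed.
    """
    # Count how often each topic appears in the reading history.
    prefs = Counter(history_topics)
    # Score each article by how strongly its topics match those counts.
    scored = [(sum(prefs[t] for t in topics), title)
              for title, topics in articles]
    scored.sort(reverse=True)
    return [title for _, title in scored[:k]]

history = ["politics", "politics", "economy"]
articles = [
    ("Partisan opinion piece", ["politics"]),
    ("Science breakthrough", ["science"]),
    ("Market analysis", ["economy", "politics"]),
]
# The science article never surfaces: the feed narrows to past interests.
print(personalize(history, articles))  # → ['Market analysis', 'Partisan opinion piece']
```

Even this crude sketch exhibits the bias-reinforcing behavior the paragraph describes: content outside the user's history is scored to the bottom and effectively disappears from the feed.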

Social Media

Unlike traditional online media sources, social media values the volume of engagement over the accuracy of content.[14] As a result, its algorithms do not include extensive checks on the validity and credibility of information.[14] Some people may take advantage of this situation to knowingly spread misinformation for personal gain, such as increasing their social media following. The option to remain anonymous on social media also reduces the consequences that people face for posting misinformation.[20] The rise in popularity of social media has led many people to use it as a news source; studies have shown that almost half of all Americans do so.[21] The spread of misinformation on social media can have especially serious consequences, including impacting public opinion, influencing decision-making, and even inciting violence.[22] Thus, social media companies have started to implement tighter policies regarding the matter.[22]

Podcasts

In recent years, podcasts have become an increasingly popular source of information and entertainment due to their easily accessible audio format.[23] However, they have also become major sources of misinformation. As with social media, podcasts are often created by individuals or small groups with limited resources and no editorial oversight.[24] Additionally, controversial podcasts tend to draw more listeners, and that popularity compounds because podcast platform algorithms prioritize displaying popular channels.[25] This encouragement of controversial content enables participants to make false claims with little accountability. Many listeners seek out podcasts that align with their existing beliefs, and the recommendation algorithms employed by podcast platforms can further reinforce those biases by suggesting similar content that contains misinformation.[2]

Advertisements

Advertising is one of the primary sources of income for many digital media companies.[26] As such, the prevalence of advertisements on their platforms can be another key avenue for distributing misinformation. Advertisers have been known to use misleading claims, false statistics, or deceptive images to sell products or services.[27] Such misinformation causes a considerable number of consumers to make purchases that they would not otherwise have made, raising ethical concerns about the responsibility of advertisers.[28] Even though governments around the world have created laws and regulations requiring advertisers to provide truthful information, enforcement can be challenging.[28]

Ethical Concerns

Social Implications

One of the major social concerns about digital misinformation is that it can create social divides through the distribution of false narratives, which introduce fear and mistrust among different groups of people.[28] For example, the labeling of the coronavirus as the "Chinese virus" led to an increased number of hate crimes against Asians in the United States.[28] Situations like this create a polarized society, which in turn makes it more challenging for politicians and constituents to find common ground when crafting policy. Misinformation can also contribute to the spread of conspiracy theories, which undermine public trust in one another and in the government.[29] Conspiracy theories can gain traction quickly and become difficult to eliminate.[29] Not only can they strain relationships between certain groups of people, but they can also result in dangerous events. For instance, conspiracy theories alleging that the 2020 United States presidential election was rigged played a role in the U.S. Capitol riot in January 2021.[20] As such, these forms of misinformation raise concerns about the societal unrest they produce.

Political Implications

One of the major political concerns about digital misinformation is that it can erode public trust in the government and the media.[26] The constant circulation of misinformation makes it increasingly difficult for people to differentiate between what is true and what is not. As a result, it becomes challenging for politicians to communicate with their constituents and for policies to be implemented effectively.[11] Another concern stems from the potential of digital misinformation to influence elections and public opinion. Misinformation can sway people's beliefs, leading to a distorted understanding of political issues and the overall political landscape.[30] Politicians can also denounce negative information about them as misinformation, which further obscures the truth.[30] For instance, Donald Trump dismissed news stories that he did not like as fake news during the 2016 United States presidential election.[5] Thus, these forms of misinformation can have a significant impact on election outcomes and the stability of society. Without accurate and transparent information, maintaining a functional democracy becomes difficult.[29]

Health Implications

One of the major health concerns about digital misinformation is that it can prevent people from seeking necessary medical attention.[31] When misinformation about treatments for an illness causes someone to refuse or delay proper medical care, it often increases the risk of progression and serious complications. Another concern arises from the possibility that misinformation can encourage people to adopt harmful practices.[32] Misinformation about the benefits of certain substances and activities may result in actions that expose people to unnecessary risks.[32] Misinformation can also accelerate the spread of infectious diseases: false information about a disease's cause, transmission, and treatment leads to behaviors that increase the risk of infection and transmission.[5] The COVID-19 pandemic embodies this concern, as misleading information about vaccines and mask policies contributed to the high volume of cases.[5] Once introduced, such misinformation spreads quickly among communities. Not only does it carry detrimental consequences, but it also undermines public trust in healthcare providers.[14] The decreased trust makes it more difficult for healthcare providers to effectively treat and prevent illnesses.

Industrial Implications

One of the major industrial concerns about digital misinformation is its use in industrial propaganda.[26] Through tools such as advertising, companies can distort reliable evidence and influence public belief. For instance, tobacco companies utilized misinformation to downplay the connection between smoking and lung cancer that numerous studies had proven.[33] Another concern originates from the ability of companies to gain competitive advantages through misinformation.[28] Companies can use misinformation to mislead potential customers about the benefits of their products.[34] They can similarly downplay the success of their competitors' products.[34] While laws exist to counter these behaviors, loopholes remain that companies can exploit.[34] Misinformation in the financial industry can result in devastating consequences for the economy. The misleading bundling of subprime mortgages into mortgage-backed securities by banks helped trigger the 2008 financial crisis.[35] In a financially driven society, the distribution of misinformation for financial advantage greatly concerns researchers.[36]

Countering Misinformation

Limitations

Countering misinformation is a complex task with many limitations. Since automated bots can rapidly spread high volumes of misinformation through media channels, countering it at the same speed is especially challenging.[31] Even with modern technology, the reversal process is slow because misinformation often presents itself in a manner that appears credible and is therefore difficult to identify.[6] Since people tend to surround themselves with sources conforming to their beliefs, counteracting misinformation contained within those sources becomes extremely hard.[37] The tendency of people to dismiss even accurate information that contradicts their beliefs further compounds the problem.[37] The legality of countering misinformation also raises many questions. In countries such as the United States, freedom of speech is a fundamental right that extends to digital expression.[38] Free speech activists have argued that the removal of information, even if inaccurate, violates that basic right.[38] As a result, identifying common ground between freedom of expression and the fight against misinformation becomes a delicate subject both legally and morally.[14] Laws against misinformation apply only to specific categories such as defamation and campaign speech, so overcoming these limitations requires a combination of education, technology, and collaboration among various stakeholders.[14]

Technological Tools

While technology plays a significant role in the spread of misinformation, it can also be used to counter it. Algorithmic fact-checkers have become an increasingly prevalent tool for eliminating misinformation.[39] Companies such as Google and Facebook have begun to implement features that automatically detect misinformation.[39] Such features typically employ machine learning algorithms trained to detect and flag false information at an early stage.[2] Fact-checking websites have also been developed to help people identify misinformation. For instance, websites such as FactCheck.org allow users to input statements and receive analysis results.[14] They also usually contain a forum where people can inquire about the validity of certain information.[14] The fact-checking programs implemented by these websites conduct syntax tree analysis against a database of news stories and evaluate misinformation based on inconsistencies.[6] Most credible mainstream media organizations run their sources through fact-checking algorithms before release, resulting in a lower percentage of misinformation distributed.[6]
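The core idea of matching a submitted statement against a database of already-rated claims can be sketched in a few lines. This is a deliberately minimal illustration, not the actual system of any fact-checking site; the database entries, threshold, and similarity measure are all assumptions:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two statements."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical database of statements already rated by human fact-checkers.
FACT_DB = {
    "the moon landing was staged in a studio": "false",
    "vaccines cause autism": "false",
    "smoking increases the risk of lung cancer": "true",
}

def check_claim(claim, threshold=0.5):
    """Return the rating of the closest known claim, or None when no
    fact-checked statement is similar enough to count as a match."""
    best = max(FACT_DB, key=lambda known: jaccard(claim, known))
    if jaccard(claim, best) >= threshold:
        return FACT_DB[best]
    return None

print(check_claim("vaccines cause autism in children"))  # → false
print(check_claim("a completely unrelated statement"))   # → None
```

Real deployments replace the word-overlap measure with trained language models and far larger claim databases, but the pipeline shape is the same: normalize the claim, retrieve the nearest verified statement, and surface its rating only above a confidence threshold.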

Community Efforts

Most modern-day digital media platforms have moderators along with report and reward systems.[37] Communities of users can leverage those tools to play a major role in countering misinformation. Reddit serves as an example of this type of interaction. Most subreddits maintain detailed rules, including ones targeting false or misleading information.[40] Not only can users report posts or comments that violate such rules, but they can also upvote or downvote them. This typically results in the correction or removal of clear misinformation.[40] In subreddits such as r/worldnews, the net amount of karma (points) correlates with the completeness and accuracy of content.[41] As such, members can also check a poster's history and karma to gauge their credibility.[41] Many other digital media platforms offer similar features as countermeasures to misinformation. Studies have shown that such reward-based systems both encourage accurate information and discourage misleading information.[42] While these methods have the potential to be effective, they can still produce false positives or false negatives depending on the biases of the user demographic.[43]
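The combined vote-and-report mechanism described above can be modeled as a simple filter. This sketch is illustrative only; the thresholds and field names are hypothetical and do not reflect Reddit's actual moderation logic:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int = 0
    downvotes: int = 0
    reports: int = 0

    @property
    def karma(self):
        # Net community score: upvotes minus downvotes.
        return self.upvotes - self.downvotes

def moderate(posts, karma_floor=-5, report_limit=3):
    """Hide posts the community has flagged: anything whose karma falls
    to the floor or that accumulates too many reports is filtered out."""
    return [p for p in posts
            if p.karma > karma_floor and p.reports < report_limit]

posts = [
    Post("Well-sourced summary", upvotes=40, downvotes=2),
    Post("Debunked rumor", upvotes=3, downvotes=25, reports=6),
]
visible = moderate(posts)
print([p.text for p in visible])  # → ['Well-sourced summary']
```

The same sketch also shows the failure mode noted at the end of the paragraph: if the voting population shares a bias, accurate posts can be downvoted below the floor (a false positive) while popular misinformation stays visible (a false negative).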

Information Literacy

Despite the effectiveness of technology in countering misinformation, experts claim that the most useful countermeasure remains public education in information literacy.[37] Information literacy is the ability to access, evaluate, and use information effectively and ethically. It encompasses several key skills: critical thinking, source evaluation, and media literacy.[40] An individual skilled in information literacy will evaluate digital information based on factors corresponding to these skills.[40] One key factor is the logical structure of the information.[22] Another is the credibility of the source, which includes checking for evidence supporting the presented information.[22] The final key factor is the presence of bias, propaganda, and other forms of manipulation.[22] Mastery of these skills enables individuals to assess the credibility and reliability of digital information and to make informed decisions accordingly.[44] In today's digitally driven world, formal education programs have begun implementing media literacy curricula that teach people to be critical consumers of information.[45]

References

  1. Brown, S. (2022, January 5). Study: Digital Literacy doesn't stop the spread of misinformation. MIT Sloan. Retrieved February 9, 2023, from https://mitsloan.mit.edu/ideas-made-to-matter/study-digital-literacy-doesnt-stop-spread-misinformation
  2. 2.0 2.1 2.2 2.3 Southwell, B. G., Thorson, E. A., & Sheble, L. (2017). The Persistence and Peril of Misinformation. American Scientist, 105(6), 372–375. http://www.jstor.org/stable/44808796
  3. Gaultney, I. B., Sherron, T., & Boden, C. (2022). Political Polarization, Misinformation, and Media Literacy. Journal of Media Literacy Education, 14(1), 59-81. https://proxy.lib.umich.edu/login?url=https://www.proquest.com/scholarly-journals/political-polarization-misinformation-media/docview/2722592570/se-2
  4. Weiss, R. (2017). EDITORIAL: Nip misinformation in the bud. Science, 358(6362), 427. https://www.jstor.org/stable/26400598
  5. 5.0 5.1 5.2 5.3 5.4 Iammarino, N. K., & O'Rourke, T. W. (2018). The Challenge of Alternative Facts and the Rise of Misinformation in the Digital Age: Responsibilities and Opportunities for Health Promotion and Education. American Journal of Health Education, 49(4), 201-205. https://doi.org/10.1080/19325037.2018.1465864
  6. 6.0 6.1 6.2 6.3 Frederick, K. (2019). The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight. Center for a New American Security. http://www.jstor.org/stable/resrep20399
  7. Heldt, A. (2019). Let’s Meet Halfway: Sharing New Responsibilities in a Digital Age. Journal of Information Policy, 9, 336–369. https://doi.org/10.5325/jinfopoli.9.2019.0336
  8. Han, Y. (2017). The Misinformation Effect and the Type of Misinformation: Objects and the Temporal Structure of an Episode. The American Journal of Psychology, 130(4), 467–476. https://doi.org/10.5406/amerjpsyc.130.4.0467
  9. Biały, B. (2017). Social Media—From Social Exchange to Battlefield. The Cyber Defense Review, 2(2), 69–90. http://www.jstor.org/stable/26267344
  10. Santhanam, M. S. (2017). Riding on Misinformation. Economic and Political Weekly, 52(28), 4–5. http://www.jstor.org/stable/26695862
  11. 11.0 11.1 Hofstetter, C. R., Barker, D., Smith, J. T., Zari, G. M., & Ingrassia, T. A. (1999). Information, Misinformation, and Political Talk Radio. Political Research Quarterly, 52(2), 353–369. https://doi.org/10.2307/449222
  12. Roese, V. (2018). You won’t believe how co-dependent they are: Or: Media hype and the interaction of news media, social media, and the user. In P. Vasterman (Ed.), From Media Hype to Twitter Storm (pp. 313–332). Amsterdam University Press. https://doi.org/10.2307/j.ctt21215m0.19
  13. Magrani, E. (2020). Hacking the Electorate: Thoughts on Misinformation and Personal Data Protection. Konrad Adenauer Stiftung. http://www.jstor.org/stable/resrep25290
  14. 14.0 14.1 14.2 14.3 14.4 14.5 14.6 14.7 Song, Y., Wang, S., & Xu, Q. (2022). Fighting Misinformation on Social Media: Effects of Evidence Type and Presentation Mode. Health Education Research, 37(3), 185-198. https://doi.org/10.1093/her/cyac011
  15. Journell, W. (2021). Taking a Reasoned Stance against Misinformation. Phi Delta Kappan, 102(5), 12-17. https://doi.org/10.1177/0031721721992559
  16. 16.0 16.1 Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences of the United States of America, 113(3), 554–559. https://www.jstor.org/stable/26467425
  17. Bessi, A., & Quattrociocchi, W. (2015). Disintermediation: Digital Wildfires in the Age of Misinformation. AQ: Australian Quarterly, 86(4), 34–40. http://www.jstor.org/stable/24877660
  18. Wilner, A. S. (2018). Cybersecurity and its discontents: Artificial intelligence, the Internet of Things, and digital misinformation. International Journal, 73(2), 308–316. https://www.jstor.org/stable/26499689
  19. Paul, P. V. (2017). Fake News, Alternative Facts, Post-Truths, Misinformation, Misinterpretation—and Other Challenges Associated With Knowledge Generation. American Annals of the Deaf, 162(1), 3–7. https://www.jstor.org/stable/26235314
  20. 20.0 20.1 20.2 Lara-Steidel, H. (2022). 'Do Your Own Research!' Misinformation, Ignorance, and Social Media. Theory and Research in Education, 20(2), 205-209. https://doi.org/10.1177/14778785221113620
  21. Bonnet, J. L., & Rosenbaum, J. E. (2020). "Fake News," Misinformation, and Political Bias: Teaching News Literacy in the 21st Century. Communication Teacher, 34(2), 103-108. https://doi.org/10.1080/17404622.2019.1625938
  22. 22.0 22.1 22.2 22.3 22.4 Bailey, N. A., Olaguez, A. P., Klemfuss, J. Z., & Loftus, E. F. (2021). Tactics for Increasing Resistance to Misinformation. Applied Cognitive Psychology, 35(4), 863-872. https://doi.org/10.1002/acp.3812
  23. Jaeger, P. T., & Taylor, N. G. (2019). Battling Information Illiteracy: How misinformation affects the future of policy. American Libraries, 50(7/8), 32–35. https://www.jstor.org/stable/26747398
  24. Kornbluh, K., Hurd, W., & Schroeder, C. (2020). Protecting Democracy and Public Health from Online Disinformation. In K. Kornbluh & S. duPont (Eds.), #Tech2021: Ideas for Digital Democracy (pp. 43–44). German Marshall Fund of the United States. http://www.jstor.org/stable/resrep28474.20
  25. Lara-Steidel, H. (2022). 'Do Your Own Research!' Misinformation, Ignorance, and Social Media. Theory and Research in Education, 20(2), 205-209. https://doi.org/10.1177/14778785221113620
  26. 26.0 26.1 26.2 Crain, M., & Nadler, A. (2019). Political Manipulation and Internet Advertising Infrastructure. Journal of Information Policy, 9, 370–410. https://doi.org/10.5325/jinfopoli.9.2019.0370
  27. Hattori, K., & Higashida, K. (2015). Who Benefits from Misleading Advertising? Economica, 82(328), 613–643. http://www.jstor.org/stable/24751974
  28. 28.0 28.1 28.2 28.3 28.4 Maréchal, N., MacKinnon, R., & Dheere, J. (2020). Targeted Advertising and COVID-19 Misinformation: A Toxic Combination. In Getting to the Source of Infodemics: It’s the Business Model: A Report from Ranking Digital Rights (pp. 13–21). New America. http://www.jstor.org/stable/resrep25417.5
  29. 29.0 29.1 29.2 Schiffrin, A. (2017). Disinformation and Democracy: The Internet Transformed Protest but Did Not Improve Democracy. Journal of International Affairs, 71(1), 117–126. https://www.jstor.org/stable/26494367
  30. 30.0 30.1 Gaultney, I. B., Sherron, T., & Boden, C. (2022). Political Polarization, Misinformation, and Media Literacy. Journal of Media Literacy Education, 14(1), 59-81. https://proxy.lib.umich.edu/login?url=https://www.proquest.com/scholarly-journals/political-polarization-misinformation-media/docview/2722592570/se-2
  31. 31.0 31.1 Kornbluh, K., Hurd, W., & Schroeder, C. (2020). Protecting Democracy and Public Health from Online Disinformation. In K. Kornbluh & S. duPont (Eds.), #Tech2021: Ideas for Digital Democracy (pp. 43–44). German Marshall Fund of the United States. http://www.jstor.org/stable/resrep28474.20
  32. 32.0 32.1 Irfan, A., Bieniek-Tobasco, A., & Golembeski, C. (2021). Pandemic of Racism: Public Health Implications of Political Misinformation. HPHR, 26, 1–7. https://www.jstor.org/stable/48617321
  33. Bero, L. A. (2005). Tobacco Industry Manipulation of Research. Public Health Reports (1974-), 120(2), 200–208. http://www.jstor.org/stable/20056773
  34. 34.0 34.1 34.2 Han, Y. (2017). The Misinformation Effect and the Type of Misinformation: Objects and the Temporal Structure of an Episode. The American Journal of Psychology, 130(4), 467–476. https://doi.org/10.5406/amerjpsyc.130.4.0467
  35. Comiskey, M., & Madhogarhia, P. (2009). Unraveling the Financial Crisis of 2008. PS: Political Science and Politics, 42(2), 271–275. http://www.jstor.org/stable/40647525
  36. Hattori, K., & Higashida, K. (2015). Who Benefits from Misleading Advertising? Economica, 82(328), 613–643. http://www.jstor.org/stable/24751974
  37. 37.0 37.1 37.2 37.3 Zucker, A. (2019). Using critical thinking to counter misinformation. Science Scope, 42(8), 6–9. https://www.jstor.org/stable/26898998
  38. 38.0 38.1 Journell, W. (2021). Taking a Reasoned Stance against Misinformation. Phi Delta Kappan, 102(5), 12-17. https://doi.org/10.1177/0031721721992559
  39. 39.0 39.1 Wright, L. (2015). Magic Beans and Dragons: The war against pseudoscience and misinformation. AQ: Australian Quarterly, 86(2), 10–40. http://www.jstor.org/stable/24364821
  40. 40.0 40.1 40.2 40.3 Jaeger, P. T., & Taylor, N. G. (2019). Battling Information Illiteracy: How misinformation affects the future of policy. American Libraries, 50(7/8), 32–35. https://www.jstor.org/stable/26747398
  41. 41.0 41.1 Paul, P. V. (2017). Fake News, Alternative Facts, Post-Truths, Misinformation, Misinterpretation—and Other Challenges Associated With Knowledge Generation. American Annals of the Deaf, 162(1), 3–7. https://www.jstor.org/stable/26235314
  42. Bessi, A., & Quattrociocchi, W. (2015). Disintermediation: Digital Wildfires in the Age of Misinformation. AQ: Australian Quarterly, 86(4), 34–40. http://www.jstor.org/stable/24877660
  43. Roese, V. (2018). You won’t believe how co-dependent they are: Or: Media hype and the interaction of news media, social media, and the user. In P. Vasterman (Ed.), From Media Hype to Twitter Storm (pp. 313–332). Amsterdam University Press. https://doi.org/10.2307/j.ctt21215m0.19
  44. Bonnet, J. L., & Rosenbaum, J. E. (2020). "Fake News," Misinformation, and Political Bias: Teaching News Literacy in the 21st Century. Communication Teacher, 34(2), 103-108. https://doi.org/10.1080/17404622.2019.1625938
  45. Smith, M. D. (2017). Arming students against bad information. The Phi Delta Kappan, 99(3), 56–58. http://www.jstor.org/stable/26388252