Misinformation in Digital Media

Image: News containing misinformation displayed on a phone.[1]

Misinformation in digital media is a subset of misinformation, which is false or misleading information. Instances of misinformation have been recorded throughout history, dating back as far as written records exist. The advancement of technology in modern times has made digital media the primary source of information for most people, while also creating an avenue for misinformation to spread quickly and reach more people. Digital media comes in a variety of forms, each of which is susceptible to producing misinformation in unique ways. Misinformation can affect all aspects of life, with heavy influence on society, politics, health, and industry, and the resulting decline in the overall accuracy of information carries negative consequences. Countering misinformation is a complicated task because media platforms must strike a balance between upholding free speech and preventing misinformation. Users and communities, on the other hand, have much greater power when it comes to making conscious choices about the information they consume. The development of technology targeting misinformation also contributes to the process. In recent years, misinformation has become a source of debate due to the complicated relationship between its influence and its regulation.

History

Pre-Internet Era

Early examples of misinformation date back to 15th-century Europe, where political rivals attempted to smear each other's reputations through various writings. The first recorded instance of large-scale misinformation was the Great Moon Hoax, a series of six articles describing life on the Moon that The Sun published in 1835. Before the internet age, misinformation was distributed through traditional media sources such as newspapers, television, and radio. Traditional media often faced censorship and manipulation by governments and other powerful organizations, which resulted in the widespread circulation of misinformation serving the interests of those in power and suppressed public access to accurate information. Additionally, information in traditional media could become distorted over long distances due to limitations in the range of communication technologies, so a particular piece of news might be reported one way in one region and differently in another. Overall, media in the pre-internet era was characterized by limited access to information and a higher degree of control over information by those in power.

Internet Age

The advancement of technology in the internet age has significantly changed the manner in which misinformation spreads. The broad influence of digital media and its associated technologies enables misinformation to spread rapidly, and anyone with a digital device can now publish information to a large audience over the internet. The ease and speed with which information can be shared online created an avenue for increased amounts of misinformation. During the 2016 United States presidential election, misinformation making up only 6% of overall news media reached about 40% of Americans. This phenomenon puts into perspective the outsized influence of modern misinformation, which will become increasingly problematic as the amount of misinformation grows. As technology continues to improve, the forms of distribution for misinformation also expand. News media channels and websites have given way to social media, which prioritizes engagement over accuracy and further amplifies the audience. In the sophisticated societal structure of modern times, the impact of misinformation multiplies significantly. As a result, technology companies have taken steps to address the spread of misinformation on their platforms.

Sources of Misinformation

News Media

Social Media

Podcasts

In the past, the news media industry offered consumers a limited number of news offerings that were consistent in nature. In contrast, consumers today have access to an abundance of news offerings targeting different groups of people. As a result, consumers often choose news sources that conform to their inherent biases[2]. This trend dramatically increases the likelihood that consumers receive misinformation. Instead of using traditional online news sources, a high percentage of Americans began using social media as their main news source in recent years[3]. Since social media has fewer requirements for posts than traditional news sources, people are more likely to both spread and receive misinformation across its many platforms. Companies such as Google and Yahoo developed algorithms to personalize consumers' news feeds based on their interests and beliefs, so two people concurrently searching for the same thing receive unique, customized results[4]. Such algorithms build upon the tendency of consumers to actively select news sources matching their biases by simplifying the process for them, rendering consumers more susceptible to the specific types of misinformation that appeal to them. Thus, the competition for customers among news media companies inadvertently created an environment in which misinformation can thrive without much resistance.
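
As a heavily simplified illustration of how this kind of interest-based personalization can reinforce existing preferences (the articles, tags, and scoring below are hypothetical placeholders, not Google's or Yahoo's actual ranking), two users searching the same pool of stories can be shown different results first:

  # Toy sketch of interest-based feed personalization; the articles, tags,
  # and scoring are illustrative only, not any real platform's algorithm.
  articles = [
      {"title": "Economy grows amid policy debate", "tags": {"economy", "policy"}},
      {"title": "New study questions vaccine schedule", "tags": {"health", "controversy"}},
      {"title": "Local team wins championship", "tags": {"sports"}},
  ]

  def personalized_feed(user_interests, articles):
      """Rank articles by overlap between article tags and the user's interests."""
      return sorted(articles,
                    key=lambda a: len(a["tags"] & user_interests),
                    reverse=True)

  # Two users issue the same query against the same pool of articles,
  # but each sees a different story first.
  feed_a = personalized_feed({"economy", "policy"}, articles)
  feed_b = personalized_feed({"health", "controversy"}, articles)
  print(feed_a[0]["title"])  # the economy story surfaces first for user A
  print(feed_b[0]["title"])  # the health story surfaces first for user B

Even in this toy version, neither user ever sees the other's top result first, which is the mechanism behind the filter-bubble effect described above.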

Ethical Concerns

Social Implications

One of the major social concerns of digital misinformation is that it can cause social divides through the distribution of false narratives, which introduces fear and mistrust among different groups of people. For example, the labeling of the coronavirus as the “Chinese virus” led to an increased number of hate crimes against Asians in the United States. Situations like this create a polarized society, which in turn makes it more challenging for politicians and constituents to find common ground in addressing such divides. Misinformation can also contribute to the spread of conspiracy theories, which undermine public trust in one another and in the government. Conspiracy theories can gain traction quickly and become difficult to eliminate. Not only can they strain the relationships between certain groups of people, but they can also result in dangerous events. For instance, conspiracy theories alleging that the 2020 United States presidential election was rigged played a role in the U.S. Capitol riot in January 2021. As such, these forms of misinformation present concerns for the societal unrest they can produce.

Political Implications

One of the major political concerns of digital misinformation is that it can lead to the erosion of public trust in the government and the media. The constant circulation of misinformation makes it increasingly difficult for people to differentiate between what is real and what is not. As a result, it becomes challenging for politicians to communicate with their constituents and for policies to be implemented effectively. Another concern stems from the potential of digital misinformation to influence elections and public opinion. Misinformation can sway people's beliefs, leading to a distorted understanding of political issues. Politicians can also denounce negative information about themselves as misinformation, which further muddies the truth. For instance, Donald Trump dismissed news stories he did not like as fake news during the 2016 United States presidential election. These forms of misinformation can therefore have a significant impact on election outcomes and the stability of society. Without accurate and transparent information, maintaining a healthy and functional democracy becomes difficult.

Health Implications

One of the major health concerns of digital misinformation is that it can prevent people from seeking necessary medical attention. When misinformation about treatments for illnesses causes someone to refuse or delay proper medical care, it often leads to an increased risk of disease progression and serious complications. Another concern arises from the possibility that misinformation can encourage people to adopt harmful practices: false claims about the benefits of certain substances and activities may lead to actions that expose people to unnecessary risks. Misinformation can also increase the spread of infectious diseases, as false information about their cause, transmission, and treatment leads to behaviors that increase the risk of infection and transmission. The COVID-19 pandemic embodies this concern, where misleading information about vaccines and mask policies contributed to the high volume of cases. Once introduced, such forms of misinformation spread quickly within communities. Not only do they carry detrimental consequences, but they also undermine public trust in healthcare providers, and that decreased trust makes it more difficult for providers to effectively treat and prevent illnesses.

Industrial Implications

One of the major industrial concerns of digital misinformation is its use in industrial propaganda. Through tools such as advertising, companies can distort reliable evidence and influence public belief. For instance, tobacco companies used misinformation to downplay the connection between smoking and lung cancer that numerous studies had proven. Another concern originates from the ability of companies to gain competitive advantages with misinformation: companies can mislead potential customers about the benefits of their products and similarly downplay the success of their competitors' products. While laws exist to counter these types of behavior, there are loopholes that companies can exploit. Misinformation in the financial industry can also have devastating consequences for the economy; the misleading bundling of subprime mortgages into mortgage-backed securities by banks helped trigger the 2008 financial crisis. In a financially driven society, the distribution of misinformation for financial advantage greatly concerns researchers.

Countering Misinformation

Limitations

Countering misinformation is a complex task with many limitations. Since automated bots can rapidly spread high volumes of misinformation through media channels, countering it at the same speed is especially challenging. Even with modern technology, the reversal process is slow because misinformation is often presented in a way that appears credible and is therefore difficult to identify. Since people tend to surround themselves with sources that conform to their beliefs, counteracting misinformation contained within those sources becomes extremely hard, and the tendency of people to dismiss even accurate information that contradicts their beliefs compounds the problem. The legality of countering misinformation also raises many questions. In countries such as the United States, freedom of speech is a fundamental right that extends to digital expression, and free speech activists have argued that the removal of information, even if inaccurate, violates that basic right. As a result, finding common ground between freedom of expression and the suppression of misinformation becomes a delicate subject both legally and morally. Laws against misinformation apply only to specific categories such as defamation and campaign speech, so overcoming these limitations requires a combination of education, technology, and collaboration among various stakeholders.

Technological Tools

While technology plays a significant role in the spread of misinformation, it can also be used to counter it. Algorithmic fact-checkers have become an increasingly prevalent tactic for eliminating misinformation. Companies such as Google and Facebook have begun to implement features that automatically detect misinformation; such features typically employ machine learning algorithms trained to detect and flag false information in its early stages. Fact-checking websites have also been developed to help people identify misinformation. For instance, websites such as FactCheck.org allow users to submit statements and receive analysis, and they usually contain a forum where people can inquire about the validity of certain information. The fact-checking programs implemented by these websites conduct syntax-tree analysis against a database of news stories and evaluate misinformation based on inconsistencies. Most credible mainstream media organizations run their sources through fact-checking algorithms before release, resulting in a lower percentage of distributed misinformation.
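
The detection systems used by these companies are proprietary, but a minimal sketch of the general approach, a supervised text classifier trained on previously verified and debunked claims, might look like the following (the tiny training set and threshold are placeholders, not a real fact-checking dataset):

  # Minimal sketch of an automated misinformation flagger using scikit-learn.
  # The labeled claims are hypothetical; real systems train on large corpora
  # of fact-checked statements and use far more sophisticated models.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  claims = [
      "Vaccine microchips track your location",         # previously debunked
      "The moon landing was filmed in a studio",        # previously debunked
      "Washing hands reduces the spread of infection",  # verified
      "Smoking increases the risk of lung cancer",      # verified
  ]
  labels = [1, 1, 0, 0]  # 1 = debunked, 0 = verified

  # TF-IDF features feeding a logistic regression classifier.
  model = make_pipeline(TfidfVectorizer(), LogisticRegression())
  model.fit(claims, labels)

  def flag_statement(text, threshold=0.5):
      """Flag a statement if it resembles known debunked claims."""
      prob_debunked = model.predict_proba([text])[0][1]
      return prob_debunked >= threshold

  print(flag_statement("Microchips in vaccines can track people"))

In practice, platforms pair automated flags like this with human review, since a model trained only on text patterns can mislabel legitimate content.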

Community Efforts

Most modern digital media platforms have moderators along with report and reward features, so communities of users can play a major role in countering misinformation. Reddit serves as an example of this type of interaction. Most subreddits contain detailed sets of rules, including rules targeting false or misleading information. Users can not only report posts or comments that violate such rules but also upvote or downvote them, which typically results in content correction or removal. The net amount of karma (points) correlates with the completeness and accuracy of content in subreddits such as r/worldnews, so members can also check a poster's history and karma to judge their credibility. Many other digital media platforms offer similar features as countermeasures to misinformation. Studies have shown that such reward-based systems both encourage accurate information and discourage misleading information. While these methods of countering misinformation have the potential to be effective, they can still produce false positives or false negatives based on the biases of the user demographic.
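
As an illustration only (the function names and the averaging scheme are hypothetical, not Reddit's actual scoring), the reward mechanism can be reduced to a net-karma calculation that readers might treat as a rough credibility signal:

  # Toy sketch of a vote-based credibility signal; real platforms weight
  # votes and apply anti-abuse heuristics not shown here.
  def net_karma(upvotes, downvotes):
      """Net points earned by a single post or comment."""
      return upvotes - downvotes

  def poster_credibility(post_history):
      """Average net karma across a poster's history, as a rough credibility
      proxy. post_history is a list of (upvotes, downvotes) tuples."""
      if not post_history:
          return 0.0
      return sum(net_karma(u, d) for u, d in post_history) / len(post_history)

  # A poster whose content is consistently upvoted scores higher than one
  # whose posts are frequently downvoted.
  history = [(120, 10), (45, 5), (8, 30)]
  print(poster_credibility(history))  # -> about 42.7

As the paragraph above notes, such a score inherits the biases of the voting community, so a high value reflects popularity with that audience rather than accuracy in any absolute sense.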

Information Literacy

References

  1. Brown, S. (2022, January 5). Study: Digital Literacy doesn't stop the spread of misinformation. MIT Sloan. Retrieved February 9, 2023, from https://mitsloan.mit.edu/ideas-made-to-matter/study-digital-literacy-doesnt-stop-spread-misinformation
  2. Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13(3), 106–131. http://www.jstor.org/stable/23484653
  3. Heldt, A. (2019). Let’s Meet Halfway: Sharing New Responsibilities in a Digital Age. Journal of Information Policy, 9, 336–369. https://doi.org/10.5325/jinfopoli.9.2019.0336
  4. Zucker, A. (2019). Using critical thinking to counter misinformation. Science Scope, 42(8), 6–9. https://www.jstor.org/stable/26898998