Disclosive Ethics

Image caption: Philip Brey's proposal for information ethics, found in Chapter 3.

Disclosive ethics refers to an exploratory subfield of information ethics founded by Philip Brey in 2000.[1] Its methodology focuses on analyzing ethical issues in information systems that are perceived as morally neutral or normative. The ethical issues that disclosive ethics examines may be built into the design of a specific information system either implicitly or explicitly, through the designers' intent or as an unintended consequence of user engagement.[2]

One of the foundations of disclosive thinking is the willingness to approach seemingly normative environments with skepticism in order to expose the unperceived biases of the designers of a given information system. Designer bias embedded during the creation of an information system can shape the way a user interacts with that system. This in turn can affect individuals' cultural belief systems and the societal structures influenced by technologies that mediate online human interaction and decision making.[1] This social implication is one of the fundamental reasons behind Brey's proposal to create a subfield of computer ethics exclusively focused on broadening the scope of moral analysis within computer practices, helping to address moral issues that have not yet been exposed.

Disclosive ethics deviates from the current standard model of applied ethics, which fails to account for morally opaque ethical issues in information systems that are perceived as neutral. It acts as a complementary approach, expanding the standard model of applied ethics and addressing its limitations. Disclosive ethics can be applied to a number of areas, including search engines, algorithms, virtual reality, online social movements, and the online representation of Black bodies.

Foundational Principles

Morally Opaque Practices

Morally opaque practices are computational practices and values embedded within the design of a system that users of the system may or may not be aware of in an online information atmosphere. They create a perception of moral neutrality on the front end of an online platform, even when that neutrality does not actually hold. The existence of a morally opaque practice can, in theory, be traced back to two probable causes within the initial design of a system.[1] The premise that morally opaque practices exist within system designs is one of the foundational beliefs of disclosive ethics.

These ideas relate closely to value-centered design, which can be described as "a process where the design choices are made to maximize system value rather than to meet performance requirements".[3] An example of this is trying to report sexual harassment on major social media platforms. These platforms do not want to be known as common places for sexual harassment, so they hide their reporting tools under other tabs and pages. By choosing not to display these features prominently, they let their own values affect others; when companies make other features easily accessible, they signal that those features matter more. The idea behind value-centered design is to design a platform with the right ethical intent.
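
As a rough illustration of how interface layout can encode values, consider the following sketch. The feature names, menu structure, and "taps to reach" measure are all hypothetical, invented for this example; no real platform's interface is being described.

    # Hypothetical sketch: nesting depth as a rough proxy for how much a
    # design values a feature. All names and depths are invented.
    MENU = {
        "share":  0,   # available directly on the post
        "like":   0,
        "report": 3,   # buried: post menu > "more options" > "support" > report
    }

    def taps_to_reach(feature):
        """Number of taps a user needs before reaching a feature."""
        return MENU[feature] + 1

    for name in MENU:
        print(f"{name}: {taps_to_reach(name)} tap(s) to reach")

Making placement decisions explicit in this way is exactly the kind of disclosure that disclosive ethics calls for: the design's priorities are no longer hidden in the interface.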

Implied Knowledge

The extent of moral opacity in a computer-related practice depends on the implied or unknown knowledge that a front-end user has about the design of a platform. A morally opaque practice can act as a hidden practice even when the user has a slight awareness of it. The user is left unaware of how the practice actually operates because there is no social outcry directed at the practice in the media, and the user's ordinary use of the system never draws direct attention to the opaque algorithm's effects on their personhood.[4]

Undetectable Embedded Practices

A practice can be built into the design of a system, through algorithms or a set of rules and conditions, such that the user appears to be using the system in a normal fashion even though that experience is shaped by values the designers embedded during the design process, often in the belief that they were protecting a societal virtue. When a computer system has potential biases that are left unaddressed, intentionally or unintentionally, the underlying bias becomes more pervasive as the collective norm as use of that system becomes more embedded in standard daily interaction with the world.[2] In this fashion, the system appears to be value neutral, and no apparent or implied knowledge of the practice can be detected by the user.
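
A minimal sketch of what an undetectable embedded practice might look like in code, assuming a hypothetical search-style system: the user sees what appears to be a neutral ranking, while an embedded condition silently removes some sources first. The blocklist, domain names, and scores are all invented for illustration.

    # Hypothetical sketch of an embedded, undisclosed filtering rule.
    HIDDEN_BLOCKLIST = {"fringe-news.example"}  # set by designers, never shown to users

    def visible_results(candidates):
        # To the user this looks like neutral ranking by relevance score;
        # the embedded condition removes some sources before ranking happens.
        allowed = [c for c in candidates if c["domain"] not in HIDDEN_BLOCKLIST]
        return sorted(allowed, key=lambda c: c["score"], reverse=True)

    results = visible_results([
        {"domain": "mainstream-news.example", "score": 0.90},
        {"domain": "fringe-news.example",     "score": 0.95},  # silently dropped
    ])
    print([r["domain"] for r in results])  # ['mainstream-news.example']

Nothing in the output reveals that filtering occurred, which is what makes the practice undetectable from the user's side.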

Ethical Implications

Search Engines

A user of a search engine may have implied knowledge that the engine's ranking criteria embed the values of popular opinion and proactive search suggestions. The built-in trend-capturing algorithms implemented to accommodate these values tend to "perpetuate the status quo" through data analysis driven by online traffic.[5] This unconsciously inserts society's pre-existing biases into the search engine through the very act of users interacting with the system. Although the organization never intended to perpetuate pre-existing bias within the framework of the system's creation, the bias arises from the system's interaction with its users in a real context of use, a phenomenon known as emergent bias.[2]
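
The "rich get richer" feedback loop behind this kind of emergent bias can be shown with a toy simulation. The assumptions here are invented for illustration: two results, a small initial popularity gap, and a fixed 80% chance that users click whichever result is currently ranked first.

    import random

    # Toy simulation of popularity-driven ranking: the result with more past
    # clicks is ranked first, and the top-ranked result attracts most clicks.
    random.seed(0)
    clicks = {"popular": 100, "alternative": 90}  # small initial gap

    for _ in range(1000):
        ranked_first = max(clicks, key=clicks.get)
        ranked_second = min(clicks, key=clicks.get)
        chosen = ranked_first if random.random() < 0.8 else ranked_second
        clicks[chosen] += 1

    print(clicks)  # the initial 10-click gap grows into a gap of hundreds

The initial gap of ten clicks widens dramatically over time even though nothing about the results' intrinsic quality ever changed, which is the sense in which the ranking "perpetuates the status quo".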

Algorithm Bias

These algorithms, which tend to drown out alternative viewpoints in mass media through data analysis, are created with the intent of bringing the most popular topics and websites to the top of the search results. Most of the data these systems rely on comes from humans, and is therefore inherently "biased data" learned from a specific society.[6][7] Users implicitly accept the values built into these systems, and this implied acceptance creates a false sense of moral neutrality around a given search engine. The implied morally opaque practice becomes the new norm.

Algorithm Protection

In response to growing societal concern about individuals having access to certain information, such as content pertaining to self-harm, the companies in charge of search engine criteria have put effort into curbing access to those results without restricting the public's ability to search for the keywords. Search engines include built-in sets of conditions that flag such keywords as harmful and restrict the user's ability to obtain instructions or content related to them.[8]
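
A minimal sketch of this flag-and-redirect pattern, assuming a hypothetical flagged-term list: the query itself is never blocked, but results for flagged terms are replaced with support resources.

    # Hypothetical sketch: flagged keywords trigger support resources instead
    # of ordinary results. The term list and result strings are invented.
    FLAGGED_TERMS = {"self harm", "suicide methods"}

    def search(query):
        normalized = query.lower()
        if any(term in normalized for term in FLAGGED_TERMS):
            # The user can still type the query; the system intervenes on results.
            return ["If you are struggling, help is available: in the US, call or text 988."]
        return [f"Result page for: {query}"]  # stand-in for a real results page

    print(search("self harm instructions"))
    print(search("weather tomorrow"))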

Virtual Reality

In his chapter in "The Handbook of Information and Computer Ethics", Brey addresses a number of emerging ethical issues that have accompanied rising virtual reality technology. One question Brey poses to the reader is whether the distinction between what is real and what is a simulation is slowly disappearing. He uses altered digital media, such as doctored photos and videos, as an example of the ways technology can be used to fabricate nearly indistinguishable representations of reality. One concern is that these representations, as a substitute for reality, may lead to over-investment in virtual environments among the general population and, consequently, disinvestment in people and activities in real life. He notes that while investments in virtual environments may come at no loss to the person making them, they may well be a loss for others who are affected by them.[1]

Online Social Movements

Online social movements stem from political mobilization efforts that began appearing on the internet in 1996, when political organizations used online mobilization tools to reach voters who fell outside existing mobilization efforts.[4] Online social movements are built on the goal of instigating social change by creating an online environment that shifts power toward underprivileged networks of individuals suppressed by a social hierarchy, using collective validation to create an advantage for the suppressed group as it expresses its societal concerns.[4] The structure of an online social movement works similarly to values embedded within a system: it supports a specific value set by promoting the need for a particular social value within an information system and society, or by demoting a less desirable value.[4]

Black Bodies

Viral social media video clips displaying evidence of racially unjust killings of African American individuals by authorities in the American justice system have circulated significantly more since the emergence of social media. News outlets and online social media platforms capitalize on embedded values, such as justice and equality, by distributing specific content to online users based on relevance. The embedded values of justice and equality act as the visible motivation for bringing attention to these stories in the online environment, but these platforms are also built, from an economic standpoint, to "commodify social life". Safiya Umoja Noble, an assistant professor at the University of Southern California, argues that these media platforms use emotional value to drive more traffic and gain users, a dynamic that is highly prevalent in online content pertaining to police killings of African Americans. Such stories increase traffic to these platforms, yet once a story becomes old news, little is done to address the societal injustices in the environments where these events take place. The circulated videos are seldom used as solidified evidence in courts of law to convict wrongdoers, and the value these viral videos have in changing corrupt systems lasts only as long as the news outlet endorses their presence online.[9]

References

  1. Brey, Philip. "Disclosive Computer Ethics." ACM SIGCAS Computers and Society, vol. 30, no. 4, 2000, p. 10. doi:10.1145/572260.572264.
  2. Friedman, Batya, and Helen Nissenbaum. "Bias in Computer Systems." ACM Transactions on Information Systems, vol. 14, no. 3, 1996, pp. 330–347. doi:10.1145/230538.230561.
  3. Van Tyne, Sean (May 29, 2016). "Value-Driven Design to Value-Centered Design". Retrieved April 28, 2019.
  4. Bimber, B. (1998). "The Internet and Political Mobilization: Research Note on the 1996 Election Season". Social Science Computer Review, 16(4): 391–401. Retrieved April 28, 2019.
  5. Trevisan, Filippo. "Search Engines and Social Science: A Revolution in the Making." SSRN Electronic Journal, 2013. doi:10.2139/ssrn.2265348.
  6. Snow, Jackie (March 1, 2018). "Bias Already Exists in Search Engine Results, and It's Only Going to Get Worse". MIT Technology Review. Retrieved April 28, 2019.
  7. O'Neil, Cathy (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books. pp. 22–30.
  8. Goldman, E. (2008). "Search Engine Bias and the Demise of Search Engine Utopianism." In Web Search: Multidisciplinary Perspectives, A. Spink and M. Zimmer (eds.), Berlin: Springer-Verlag, pp. 42–43.
  9. Noble, Safiya Umoja. "Critical Surveillance Literacy in Social Media: Interrogating Black Death and Dying Online." Black Camera, vol. 9, no. 2, 2018, p. 147. doi:10.2979/blackcamera.9.2.10.