Privacy in the Online Environment

From SI410
Revision as of 15:54, 26 April 2019 by Mestei (Talk | contribs) (Doxxing)


Privacy in the Online Environment (also referred to as information privacy and Internet privacy) refers to a user's rights and control over their personal information in the digital realm. It allows for many tiers of privacy (expressed in the theories below), while simultaneously letting users freely express themselves in whichever manner they choose without jeopardizing their real identities (refer to Anonymous Behavior in Virtual Environments). Privacy in the Online Environment is related to the fields of information privacy and anonymity in cyberspace.

The Complexity of Privacy in Virtual Environments

Because there are various types of environments on the web, the need for privacy must be distinguished in each kind of environment. For example, the amount of information one would give to other users when playing a massively multiplayer online role-playing game (MMORPG), such as World of Warcraft, is probably not the same as the content one might share with friends when playing Halo 3 or interacting on Facebook. The dynamics of privacy become increasingly complex as the number of users increases. If the other users in an environment are exclusively friends, one may not want to hide their real name, phone number, and home address. However, if the environment is occupied by millions of unknown people, an individual may not want to share as much personal information. One may choose to remain completely anonymous or may configure privacy settings to ensure greater protection over such information.

Presentation of Self

Information privacy has been directly linked to personal identity. According to David Shoemaker, "a threat to informational privacy is a threat to our personal identity [1]."

Dean Cocking (2008) describes the social psychological presentation of self as two-fold: active and passive. In offline environments, individuals vary among multiple selves circumstantially. At any given time, however, a person presents both a voluntarily communicated active self (clothing, body piercings/tattoos, external behaviors, etc.) and an involuntary, more intimate passive self (personal history, vocal tone, facial contortions, etc.). In online environments, the active self is malleable. [2] A user can portray dishonest, inaccurate personal information through their avatars and profiles. [3]

Even though what Cocking (2008) means by a passive self online is ambiguous at best, limitations to Internet privacy, and cases where one's privacy is threatened or violated, are easily imaginable.


Privacy is one area of Information Ethics where the function and implementation of value-laden ICTs can be considered in conjunction with Floridi's Infosphere. Floridi's ontological model of privacy is a robust and sufficiently broad model that handles information privacy more effectively than other prominent and influential theories of privacy: the Control Theory, the Restricted Access Theory, and the combined Restricted Access/Limited Control Theory.

Ontological Model of Privacy

Luciano Floridi (2005) argues that the amount of privacy held by an individual can be inferred as a measure of ontological friction, or the degree to which "forces oppose the information flow within (a region of) the infosphere." Consider a real-world example of offline ontological friction: two people on a road are walking toward each other from far away, each trying to read what the other's shirt says, but because they are too far apart, they cannot. What the T-shirts say is the presentation of an active self, the road is the infosphere, and the distance between the two people represents ontological friction. On the Internet, ontological friction could be described as the degree to which an agent is hindered from learning private information about a patient. The friction could refer to the patient's use of privacy settings, the degree to which their accounts on different social networking sites are disconnected, or the extent to which other elements of the patient's online persona remain obscure. [4]
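Floridi's notion is qualitative, but a toy model can make it concrete. The sketch below is my own illustration (with invented attribute names), not Floridi's formalism: it treats friction as the fraction of a patient's attributes whose flow some opposing force blocks.

```python
# Toy illustration of ontological friction (not Floridi's own formalism):
# friction is modeled as the fraction of a patient's personal attributes
# whose flow toward an observing agent is opposed by some barrier.
def ontological_friction(attributes, barriers):
    """attributes: set of a patient's personal attributes.
    barriers: dict mapping attribute -> True if some force (a privacy
    setting, disconnected accounts, general obscurity) opposes its flow."""
    if not attributes:
        return 0.0
    blocked = sum(1 for a in attributes if barriers.get(a, False))
    return blocked / len(attributes)

# Hypothetical persona: privacy settings hide two of four attributes.
persona = {"name", "phone", "address", "employer"}
settings = {"phone": True, "address": True}
print(ontological_friction(persona, settings))  # 0.5: half the flows are opposed
```

A friction of 0.0 corresponds to a frictionless region of the infosphere where the agent learns everything; 1.0 corresponds to complete obscurity.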

Restricted Access Theory

Under the Restricted Access theory, first introduced in 1983 by Armstrong and Bok, it is up to individuals to create zones (or acknowledge contexts) of privacy for themselves. Privacy is achieved when one creates a zone that prevents others from unacknowledged prying, and where one shines light only on desired characteristics of self in order to avoid or obscure access to other, potentially passively conveyed, attributes. The limitation of this theory is that it does not specify how the user's zone of privacy is constructed (i.e., the ability to grant or deny access to viewers). [5] [6]
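Although the theory leaves the mechanism unspecified, one can sketch what a constructed zone might look like. The hypothetical `PrivacyZone` class below is my own illustration of a zone that grants or denies viewer access per attribute; it is not part of the theory itself.

```python
# Hypothetical sketch of a "zone of privacy" as a per-attribute access check.
# The Restricted Access theory does not specify this mechanism; this is one
# possible way the grant/deny ability it omits could be realized.
class PrivacyZone:
    def __init__(self, owner):
        self.owner = owner
        self._granted = {}  # viewer -> set of attributes they may see

    def grant(self, viewer, attribute):
        """The owner shines light on one attribute for one viewer."""
        self._granted.setdefault(viewer, set()).add(attribute)

    def can_view(self, viewer, attribute):
        # Anything not explicitly granted stays behind the zone boundary.
        return attribute in self._granted.get(viewer, set())

zone = PrivacyZone("alice")
zone.grant("bob", "email")
print(zone.can_view("bob", "email"))  # True: access was granted
print(zone.can_view("eve", "email"))  # False: unacknowledged prying is blocked
```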

Control Theory

Gavison’s (1980) Control Theory acknowledges that certain information will inevitably be interpreted, and stipulates that individuals inherently own and actively control all information about the self. Privacy exists only when the user has control over some domains of unrevealed personal information. This theory does not specify what constitutes a zone of personal information, why it does so, or the extent of the user's control over said information. [7] [8]

Restricted Access/Limited Control Theory (RALC)

Developed by James Moor and Herman Tavani, RALC is a hybrid of the Restricted Access and Control theories. RALC acknowledges that some information is controlled and some is restricted, and that the user maintains privacy via different means depending on the context of the situation. One important distinction is exemplified in the case of the 'condition of privacy' versus 'the right to privacy'. The 'condition of privacy' states that if users expose their personal information themselves, they have lost their privacy. In contrast, if a user did not choose to reveal certain information, yet that information is used without permission, 'the right to privacy' has been violated. [9]
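The condition/right distinction can be made explicit in a minimal sketch. The encoding into two flags below is my own illustration, not Moor and Tavani's formalism.

```python
# Minimal sketch of RALC's distinction between losing the *condition* of
# privacy and having the *right* to privacy violated. The flag encoding is
# an illustration only, not Moor and Tavani's own formalism.
def privacy_status(exposed_by_user, used_without_permission):
    """Classify one piece of personal information under RALC.
    exposed_by_user: the user chose to reveal it themselves.
    used_without_permission: a third party used it without consent."""
    if exposed_by_user:
        # The condition of privacy is lost, but no right was violated.
        return "condition lost"
    if used_without_permission:
        # Information the user never chose to reveal was used anyway.
        return "right violated"
    return "private"

print(privacy_status(True, False))   # condition lost
print(privacy_status(False, True))   # right violated
```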

Protecting Privacy


There is a dichotomy in the sharing of information: publicly available information and partially available information. An ongoing controversy stems from the former case. Data mining, the collection and use of publicly shared pieces of information (via databases, television, records, etc.), has been alleged to be a blatant violation of one's information privacy. Specifically, the term "use" is vague but can extend to a company using personal information and observed behavioral patterns to develop a profitable algorithm and/or enterprise. This issue of data mining and violation of privacy comes into conflict with all three theories (i.e., Control, Restricted Access, RALC).

Helen Nissenbaum points out the unethical aspects of data mining:

  • data is collected in one context and used in another, allowing, for example, manipulation of information or a negative portrayal of character
  • certain collections or patterns of information may expose the identity of a person in an unexpected way

Users are therefore vulnerable to identity theft merely by participating in any kind of environment, especially a virtual one. One's real identity can be put at risk if the right combination of collected bits of information falls into the hands of an unauthorized viewer. Information revealed in the online environment lends itself to four self-identities: Self-Determination, Narrative, Social, and Self-Esteem. These four categories possess subsets of qualities that contribute to the understanding of a person: for example, one's beliefs, emotional dispositions, religious values, etc.
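Nissenbaum's second point, that combined patterns of information can expose identity unexpectedly, can be sketched with a toy re-identification example. The datasets and field names below are hypothetical: two records that are individually harmless are joined on shared quasi-identifiers to reveal who the "anonymized" record describes.

```python
# Illustrative sketch with hypothetical data: an "anonymized" dataset (no
# names) and a public dataset (no sensitive data) are linked on shared
# quasi-identifiers (ZIP code and birth year), re-identifying the person.
health_records = [  # published without names
    {"zip": "48104", "birth_year": 1990, "diagnosis": "asthma"},
]
voter_rolls = [     # public record: names, but nothing sensitive
    {"zip": "48104", "birth_year": 1990, "name": "J. Doe"},
]

def reidentify(anon, public):
    """Join records that agree on every quasi-identifier."""
    matches = []
    for a in anon:
        for p in public:
            if a["zip"] == p["zip"] and a["birth_year"] == p["birth_year"]:
                # The merged profile links a name to the sensitive attribute.
                matches.append({**p, **a})
    return matches

print(reidentify(health_records, voter_rolls))
```

Each dataset respects its own context; it is the combination, collected in one context and used in another, that violates privacy.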

Data Breach Notification

Currently, all fifty states have mandatory reporting laws requiring companies to inform consumers and provide reparations if a data breach occurs. However, not all states publish the occurrence of these data breaches. As a result, it is difficult to study the occurrence of data breaches across the US, leaving consumers uncertain about their frequency and risk.


Doxxing

Main Article: Doxxing

Doxxing is a form of breach of privacy online. This form of online harassment occurs when trolls, also known as doxxers, find private or personal information about an individual and purposely release it without the victim's approval [10]. This contributes to the vulnerability mentioned previously, as the user's real identity and much more is released to the public for anyone to see, harming the privacy and anonymity afforded by the online environment. Doxxing can lead to many unethical actions, such as sexual or verbal harassment or threats to a person's life.

Self-Identities in an Online Environment


Self-Determination Identity

Created by Harry Frankfurt, this conception states that the psychological elements that govern a person are only truly authorized by that person. Such 'psychological elements' (hope, theoretical beliefs, motivations, etc.) evidently lend themselves to self-identity.


Narrative Identity

This identity was developed by Marya Schechtman and states that identity is conceived through one's decision to impart importance to certain psychological features and to give those features an intelligible place in one's life.

Social Identity

Developed by Charles Taylor and Anthony Appiah, this identity joins collective and personal properties of persons that are seen as important for social life (i.e. race, gender, ethnicity, religion, sexual orientation) [11].

Self-Esteem Identity

David Copp asserts that certain properties are clearly part of self-identity when positive or negative emotions are consistently associated with certain situations [12]. For example, an aspiring student will almost always feel disappointment when receiving a lower grade on an exam.

References


  1. Shoemaker, David. "Self-exposure and Exposure of the Self: Informational Privacy and the Presentation of Identity." Ethics and Information Technology 12.1 (2009): 3-15. SpringerLink. 3 Apr. 2009. Web.
  2. Cocking, D. (2008). Plural selves and relational identity: Intimacy and privacy online. In J. van den Hoven & J. Weckert (Eds.), Information technology and moral philosophy (pp. 123-141). Cambridge University Press.
  3. Ellison, N., Hancock, J., & Toma, C. (2012). Profile as a promise: A framework for conceptualizing veracity in online dating self-presentations. New Media and Society, 14(1). doi:10.1177/1461444811410395.
  4. Floridi, L. (2008). The ontological interpretation of informational privacy. Ethics and Information Technology, 10(2), 115-121. doi:10.1007/s10676-006-0001-7.
  5. Armstrong, W. P., & Bok, S. (1983). Secrets: On the Ethics of Concealment and Revelation. Vintage: Reissue edition.
  6. Tavani, H. (2008). Informational privacy: Concepts, theories, and controversies. In K. E. Himma & H. Tavani (Eds.), Handbook of information and computer ethics. Hoboken, NJ: John Wiley and Sons.
  7. Gavison, R. (1980). Privacy and the limits of law. The Yale Law Journal, 89(3), 421-471. doi:10.2307/795891.
  8. Tavani, H. (2008). Informational privacy: Concepts, theories, and controversies. In K. E. Himma & H. Tavani (Eds.), Handbook of information and computer ethics. Hoboken, NJ: John Wiley and Sons.
  9. Tavani, H. (2008). Informational privacy: Concepts, theories, and controversies. In K. E. Himma & H. Tavani (Eds.), Handbook of information and computer ethics. Hoboken, NJ: John Wiley and Sons.
  10. Stein, Joel. "How Trolls Are Ruining the Internet." Time, 18 Aug. 2016.
  11. "Charles Taylor (philosopher)." Wikipedia. N.p., n.d. Web.
  12. Copp, David (2005). The normativity of self-grounded reason. Social Philosophy and Policy, 22(2), 165-203.