User:Kevjiang


David W. Shoemaker argued that even though there are numerous properties that form us, only a small subset of those properties constitutes our core self-identity (1). Following that train of thought, this assignment can almost be seen as a journey of self-discovery, one which I embark on to see whether the information about me on the web actually captures those core properties that make me who I am. The result is interesting. On one hand, I don’t feel disturbed by the amount of information that exists about me; on the other hand, I do find it troubling when the information that ICTs collect about me begins to influence my self-identity in ways that I don’t desire, or when it begins to obstruct my ability to manage my own public identity online.

As humans, we don’t like to be controlled. We can see evidence of that from the beginning of time, when Adam and Eve disobeyed God, to the countless uprisings throughout human history fought for freedom. The same holds true when it comes to the development of our own identity. That’s why I resonate deeply with Floridi when he presents the argument that having privacy isn’t just about having information we don’t want others to access; it’s also about having a secluded space in which to develop independently, free of outside influences (2). This brings me to my first point. ICTs’ capability to collect our data and analyze our behavior patterns, together with the mechanisms that allow that information to be shared with other parties, poses the ethical question of intrusion into that “private development space” we ought to have. This is important not only because ICTs are taking over the self-reflective process about my behaviors that I should be doing myself, but also because other parties may use the information they have about me in ways biased toward their own advantage. They may want to reinforce a certain behavior pattern of mine in order to sell me a product or get me to agree to an idea, even when doing so may not be in my best interest.

I hope to illustrate my point through both a positive and a negative example that I encountered as I was searching for my self-identity online.

As I was examining the data that Facebook collected about me, one folder contained a list of advertisements that Facebook believes I am interested in based on my activities on the platform. It doesn’t surprise me that a large portion of those advertisements point to my passion for food. However, after reflection, I realized that my interest in food is also strengthened by the advertisements for particular products and services that Facebook recommends to me. One particular instance is when I was “accidentally” drawn to a video advertisement for Gordon Ramsay’s online master class, where he teaches cooking techniques in a series of video lessons. I reacted to that post as if I had found a secret treasure, commenting “Christmas came early” before eagerly entering my credit card information to enroll in the course. After that, it’s not surprising that I received more advertisements related to food and the culinary arts. In fact, Facebook continues to give me recommendations for cooking classes offered by other famous chefs, and thus the cycle continues.

Ultimately this example illustrates my point in a positive way because even though I don’t like the idea of being “manipulated,” I am glad that, through the data Facebook collects about me, it is able to reinforce a part of my identity that I value and want to develop further. However, it becomes a bigger ethical problem when my interests are at odds with those of ICTs and they attempt to reinforce behavior patterns that I actually want to change.

To illustrate this point, I am going to use the hypothetical example of YouTube, because I wasn’t able to download the data they have about me. But I wouldn’t be surprised if the user profile YouTube generates about me looks something like this: “User is an Asian male attending the University of Michigan, currently a student, frequently interacts with video content ranging from movie clips and anime clips to cooking shows and movie romance scenes.” YouTube is then able to feed that profile into its recommendation algorithm to fill my homepage with more videos that keep me coming back to the site.

Now, as someone who struggles with pornographic addiction and wishes to get rid of that property that currently roots itself in me, I should avoid any video content that may stimulate that temptation. But in this case, YouTube violates an ethical boundary by exposing me to unwanted data, and stands in the way of my personal desire to improve my identity by getting rid of the patterns of behavior that stop me from becoming my ideal self.

In addition to affecting the development of my identity internally, I am also troubled when the information ICTs collect about me forms an inaccurate portrayal of me as a person. This points to Shoemaker’s argument on how ICTs’ data collection process violates our autonomy to present our self-identities to others in the manner we see fit (3).

When I began my search process on Google, it initially led me to several data broker websites that all present slight variations on the kind of information they have about me. Typically, it consists of my name, my current on-campus address, my permanent home address, and profiles of some of my close relatives. It was an interesting experience to see online profiles about myself, in that I almost felt a sense of pride, thinking that I am worthy enough for someone to collect information on and build a profile about. It reflects Floridi’s argument that Generations X, Y, and Z perceive privacy through a different lens than previous generations. We generally view the internet as a “semi-public” space, and we make decisions on the base assumption that other individuals are able to see our behaviors (4).

What surprised me, however, is the amount of erroneous information I found about myself behind paywalls. I was able to retrieve one such report from InstantCheckMate, where not only does the profile incorrectly attach middle initials that I never knew I had, it also lists a string of academic and work experiences that simply doesn’t make sense. The little accurate information it does have about me is often outdated, and it even includes pictures of “me” that I don’t want to associate myself with (no offense to the actual individuals featured in those photos).

It’s not that I want to present only an overly polished version of myself; I just want to be presented accurately. I don’t know who will want to see my report, and I don’t know what kind of biases they may bring when they read it. But if I am going to be judged, I at least want to be judged by my true self, not by the inaccurate experiences and pictures of other Kevins that this data broker has associated with me.

This is especially troublesome considering the lack of restrictions on what people can do with this information as long as they are willing to pay. For example, to retrieve the report about myself, I had to agree to the condition that I would not use the service or the information it provides to make a number of decisions, including decisions about employment. However, despite having to check that box, there is no mechanism beyond the user’s own conscience to ensure that they actually keep that promise. Many employers nowadays can choose not to disclose the specific reasons why they rejected a candidate by stating that doing so would violate their own privacy policies – yes, I am talking about you, Google. This means that as candidates, we will never know whether they unethically used information they weren’t supposed to use while making those decisions.

I know the risk of that actually happening is small, but in today’s competitive job market, that misrepresentation is still a risk I don’t want to take.

In the end, this has been an enjoyable experience. Something that stood out to me while doing the course readings, especially the David Shoemaker piece, is the number of times the word “vague” and other conditional clauses are used to define informational privacy. It shows that there is still much debate over the theoretical foundations that many ethical arguments are grounded upon. Even though these authors are doing their best to put the puzzle together, we as users are the final judges in determining to what degree we feel our privacy and autonomy have been violated by ICTs.

References

  1. Shoemaker, David W. 2009. "Self-exposure and Exposure of the Self: Informational Privacy and the Presentation of Identity," p. 22.
  2. Floridi, Luciano. 2014. "Privacy: Informational Friction," in The Fourth Revolution, pp. 119-120.
  3. Shoemaker, David W. 2009. "Self-exposure and Exposure of the Self: Informational Privacy and the Presentation of Identity," p. 11.
  4. Floridi, Luciano. 2014. "Privacy: Informational Friction," in The Fourth Revolution, pp. 106-107.