Maggie O'Meara


Introduction

Every person I will ever meet has a different perspective on me. These people will have good and bad memories of me, forming their own ideas of who I am. A stranger on the subway sees me as the girl who rushed onto the last car to make the 7am commute. A friend on my cheerleading team knows me as the girl who broke her wrist landing a tumbling pass wrong. My classmate thinks of me as the girl who could spit out mental math like a calculator. When someone first reads the name Maggie O'Meara, however, no full persona forms in their mind; a name doesn't pass you on the sidewalk. Yet in our digital society, a name passes you by frequently. A recruiter will first see the name Maggie O'Meara in big bold letters at the top of my resume. They will then search my name, but all they will find is my only public data persona: @mags_omeara on Instagram. The other four @mags_omeara accounts are masked behind the privacy settings these applications provide. My data identity changes depending on the social media platform, creating different digital personas that evolve as I age. These personas alter the perception that both the data collector and other users have of me, subjectively skewing Big Data algorithms.

My Data Identities

Data Policies

My Average Activity: The "Your Activity" Page

After analyzing this evolution of my different digital identities, I realized my lack of posting was the result of two major fears: recruiters seeing everything that makes up my digital footprint, and companies taking advantage of user data. Both use algorithms to serve their purposes, and both algorithms are opaque to the typical user, who is also the input data. The only physical manifestation of recruiters' algorithms I can see is the results I get when I Google myself; social media platforms, however, are legally obligated to publish their data privacy policies, so I can at least read the inputs to their algorithms.

Every time a user browses a website or application, they have already agreed to its terms and conditions, creating a default opt-in system. If a user disagrees with the policy, there is no way to opt out of data collection and still use the app. When a new user creates an account, the massive web begins. According to Instagram's privacy policy, the company collects all of the user's content, including the location of a shared photo, interactions with other users, hashtags, and more (https://help.instagram.com). This forms a large network of data that allows Instagram to build a profile it assumes parallels the user's physical embodiment. Exploring further, I found the "Your Activity" page, which says that I average 50 minutes per day on Instagram. That adds up to roughly 304 hours per year, and since I have been on the app for 7 years, a total of nearly 89 days. While this is only a small fraction of my entire life, 89 days creates a massive web of data on @mags_omeara. Instagram uses this data to sell to advertisers and to give users recommendations for events, accounts to follow, and more, keeping the user engaged in the app. After looking at other applications' data policies, I found they all conform to the same motto: the more we collect, the better.
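As a rough check on those figures, here is a minimal back-of-the-envelope calculation in Python, assuming the 50-minutes-per-day average from my "Your Activity" page and a flat 7 years on the app:

    # Back-of-the-envelope total time on Instagram,
    # based on the "Your Activity" average of 50 minutes/day.
    minutes_per_day = 50
    hours_per_year = minutes_per_day * 365 / 60          # ~304 hours/year
    years_on_app = 7
    total_days = hours_per_year * years_on_app / 24      # ~88.7 days

    print(f"{hours_per_year:.0f} hours/year, {total_days:.1f} days total")
    # -> 304 hours/year, 88.7 days total

Rounded up, that is the nearly 89 days quoted above.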

Because the data collected is specific to the content on that platform, each platform could use its large dataset to make in-app improvements. The issue arises when Big Data is sold to third parties. Although a third party acquires massive data sets, each set is biased because it is incomplete. Crawford states that "A data set may have many millions of pieces of data, but this does not mean it is random or representative" (Crawford 8). A buyer acquiring my Instagram data (@mags_omeara) will only get my lively side, whereas one obtaining only my LinkedIn data will only see my professional side. A well-known example of this is the Cambridge Analytica Facebook scandal, in which the company obtained millions of users' Facebook data and grouped people by OCEAN personality traits in order to target them to vote a certain way politically (Resnick 1). If my Facebook data were sorted through this algorithm, @mags_omeara would score highly on agreeableness because I interact frequently with family, leading the algorithm to assume I have close family relationships. On Instagram, however, @mags_omeara would score highly on extroversion, showing me a completely different targeted ad than on Facebook. The data that Cambridge Analytica and many other third parties receive is therefore biased: they are getting neither the full picture of their users' digital footprints nor of their users' physical identities.
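To make this bias concrete, the sketch below shows one hypothetical way platform-specific interaction counts could be mapped to OCEAN trait scores. The function, weights, and counts are entirely my own invention for illustration; they do not reflect Cambridge Analytica's actual model:

    # Hypothetical illustration: scoring OCEAN traits from one platform's data.
    # The weightings below are invented for illustration only; they do not
    # reflect any real model used by Cambridge Analytica or anyone else.

    def score_traits(interactions: dict) -> dict:
        """Map raw interaction counts from a single platform to OCEAN scores."""
        return {
            "openness": interactions.get("art_posts", 0) * 0.5,
            "agreeableness": interactions.get("family_comments", 0) * 0.8,
            "extroversion": interactions.get("party_photos", 0) * 0.7,
        }

    # The same person looks different depending on which platform supplied the data.
    facebook_data = {"family_comments": 40, "party_photos": 2}
    instagram_data = {"family_comments": 3, "party_photos": 35, "art_posts": 12}

    print(score_traits(facebook_data))   # high agreeableness
    print(score_traits(instagram_data))  # high extroversion

One person produces two very different profiles depending on which platform supplied the input data, which is exactly the incompleteness Crawford describes.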

Conclusion

Just as the stranger on the subway sees me as the girl rushing to work, that interaction is only a fraction of my physical embodiment and cannot be used to create a complete understanding of my identity. While I am on 5 major platforms, each contributes only ⅕ of my digital footprint, and an even smaller fraction of my whole identity. Because these different personas are constantly evolving, the input data on me is exclusive to each specific application, skewing Big Data sets. Whether companies use this data to profit from other companies or to enhance their own applications, the data collected on each individual user is misrepresentative because of its limited scope. Essentially, the user does not equal the person. If companies want to use my data successfully, they have to see me simply as a user and not as an entire person within that user. @mags_omeara is playful, family oriented, artistic, professional, and a baker. But I am much more.