Apple Watch
The Apple Watch is a wearable smartwatch that allows users to perform a variety of tasks. To function, the watch must be paired with an iPhone 5 or later model.[1] Processes like cooking a meal or reading a manual to assemble a new piece of furniture are examples of algorithms in everyday life[2]. Algorithms are grounded in logic, and the increase in their logical complexity, driven by advancements in technology and human effort, has provided the foundation for technological concepts such as artificial intelligence and machine learning[3]. The influence of algorithms is pervasive, particularly in computer science, and this reach has led to growing ethical concerns in the areas of bias, privacy, and accountability.
Apple Watch Features
The Apple Watch offers a variety of features that allow users to engage with the watch in their everyday lives.
Fitness Tracking
The Apple Watch offers fitness tracking in the form of Activity Rings, which are broken down into three colored rings. The first is the red Move ring, which lets users track how many calories they have burned by moving throughout a 24-hour period. The second is the green Exercise ring, which tracks the minutes of brisk activity the user has completed that day. Finally, the blue Stand ring tracks how many hours the user has stood in a 24-hour period (source: https://learning-oreilly-com.proxy.lib.umich.edu/library/view/apple-watch-for/9781119658665/c08.xhtml#h2-6). The user can define an activity goal for each type of ring, and a ring visually closes once the user achieves that goal.

The visualization above demonstrates the different types of rings, as well as the user's progress toward the goal associated with each ring.

History

The Apple Watch was first released in 2015 with three distinct models; however, the genesis of the Apple Watch reportedly started with Jony Ive, Apple's chief designer, in 2011 (source: https://appleinsider.com/inside/apple-watch). Since 2015, Apple has released multiple other models that provide different features to the user.
Computation
Another cornerstone for algorithms comes from Alan Turing and his contributions to cognitive and computer science. Turing studied cognition and designed ways to emulate human cognition with machines. This work recast the human thought process as mathematical algorithms and led to the development of Turing machines, theoretical devices that apply simple rules to perform unique functions, paving the way for the development of computers. As their name suggests, computers use specific rules, or algorithms, to compute, and it is these machines (or sometimes people)[4] that most often relate to the concept of algorithms used today. With the advent of mechanical computers, the computer science field paved the way for algorithms to run the world as they do now, calculating and controlling an immense number of facets of daily life. To this day, Turing machines are a main area of study in the theory of computation.
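To make the idea concrete, below is a minimal Turing machine simulator in Python. It is an illustrative sketch only, not drawn from any cited source; the rule table, tape alphabet, and the bit-flipping machine it runs are invented for demonstration.

```python
def run_turing_machine(tape, rules, state="start"):
    """Tiny Turing machine: a tape, a read/write head, and a transition table.

    rules maps (state, symbol_read) -> (symbol_to_write, head_move, next_state);
    the machine halts when it enters the "halt" state.
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol_to_write, move, state = rules[(state, tape[head])]
        tape[head] = symbol_to_write
        head += 1 if move == "R" else -1
    return "".join(tape)

# A machine that flips every bit of its input, halting at the '_' end marker.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("10110_", rules))  # prints 01001_
```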
Advancements In Algorithms
In the years following Alan Turing's contributions, computer algorithms increased in magnitude and complexity. Advanced algorithms, such as those underlying artificial intelligence, are characterized by their use of machine learning capabilities.[5] This level of algorithmic improvement provided the foundation for further technological advancement.
The machine learning process shown above describes how machine learning algorithms can provide more features and functionality to artificial intelligence.
Classifications
There are many different classifications of algorithms; some are better suited to particular families of computational problems than others. In many cases, the algorithm one chooses for a given problem involves tradeoffs between time complexity and memory usage.
Recursive Algorithms
A recursive algorithm is an algorithm that calls itself with decreasing values in order to reach a pre-defined base case. The base case determines the values that are sent back up the recursive stack to produce the final outcome of the algorithm. Recursion follows the principle of solving subproblems to solve a larger problem: once the base case is reached, the algorithm works upwards, fitting each solution into the larger subproblem. The base case must be present; otherwise the recursive function will never stop calling itself, creating an infinite loop. Since recursion involves numerous function calls, it is one of the main sources of stack overflow: each recursive call forces the program to save another stack frame even as available space runs out. Further, some recursive functions require additional computation after the recursive call returns, adding to time and memory consumption. Tail-recursive functions are an efficient solution to this, wherein the recursive call happens at the very end of the function, allowing a single stack frame to be reused throughout the calls.

Due to the recurring stack frames created with each call, recursive algorithms generally require more memory and computational power. However, they are still viewed as simple and succinct ways to write elaborate algorithms.
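As an illustrative sketch (the function choice is arbitrary), the factorial function below shows the base case, the recursive case, and a tail-recursive variant of the kind described above. Note that CPython does not optimize tail calls, so the variant demonstrates the concept only.

```python
def factorial(n):
    """Recursively compute n! by solving the smaller subproblem (n-1)!."""
    if n <= 1:                        # base case: stops the chain of calls
        return 1
    return n * factorial(n - 1)       # recursive case: smaller value each call

def factorial_tail(n, acc=1):
    """Tail-recursive variant: the recursive call is the very last operation,
    so a language with tail-call optimization can reuse one stack frame."""
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)

print(factorial(5))       # 120
print(factorial_tail(5))  # 120
```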
Non-Deterministic Algorithms
Non-deterministic algorithms differ from deterministic ones in that they are capable of using a multiple-valued choice function whose values are the positive integers less than or equal to its argument. In addition, all points of termination are labeled as successes or failures. The terminology "non-deterministic" does not imply randomness, but rather a kind of free will[6]
Brute Force Algorithms
A brute force algorithm is the most "naive" approach one can take in attempting to solve a particular problem. A solution is reached by searching through every single possible outcome before arriving at an answer. In terms of complexity or Big-O notation, brute force algorithms typically represent the highest order complexity compared to other potential solutions for a given problem. While brute force algorithms may not be considered the most efficient option for solving computational problems, they do offer reliability as well as a guarantee that a solution to a given problem will eventually be found.
An example of a brute force algorithm would be trying all combinations of a 4-digit passcode in order to crack into a target's smartphone.
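A minimal sketch of that passcode example follows; the check_guess callback is a hypothetical stand-in for a real unlock attempt.

```python
from itertools import product

def crack_passcode(check_guess):
    """Brute force: try every 4-digit combination (0000-9999) until one works."""
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if check_guess(guess):
            return guess              # a solution is guaranteed to be found
    return None                       # search space exhausted without a match

secret = "4831"                       # hypothetical target passcode
print(crack_passcode(lambda g: g == secret))  # '4831', after at most 10,000 tries
```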
Divide and Conquer Algorithms
A divide and conquer algorithm divides a problem into smaller subproblems, conquers each subproblem, and then merges the results to solve the original problem. In terms of efficiency and Big-O notation, divide and conquer fares better than brute force but is still relatively inefficient compared to more specialized algorithms.

Examples of divide and conquer algorithms include the sorting algorithm merge sort, wherein a list is split into smaller sorted lists that are then merged together to sort the original list, and the searching algorithm binary search[7].
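A sketch of merge sort illustrates the pattern: divide the list, recursively conquer each half, then merge.

```python
def merge_sort(items):
    """Divide the list in half, sort each half recursively, then merge."""
    if len(items) <= 1:               # a list of 0 or 1 items is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # conquer each sub-problem
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # merge step: repeatedly take the smaller front element of the two halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```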
Dynamic Programming
Dynamic programming takes advantage of overlapping subproblems to solve a larger computational problem more efficiently. The algorithm first solves the less complex subproblems and stores their solutions in memory. More complex problems then look up these stored solutions and use them to compute their own. This lookup enables each solution to be computed once and used multiple times, which can reduce time complexity from exponential to polynomial.
An example of a common problem that can be solved by Dynamic Programming is the 0-1 Knapsack Problem.
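A compact sketch of a 0-1 knapsack solver shows the idea: each subproblem ("best value within weight w") is computed once, stored, and reused.

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack via dynamic programming.

    dp[w] stores the best value achievable with total weight <= w given the
    items considered so far, so overlapping subproblems are solved only once."""
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate weights downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# take the items weighing 2 and 3 for a total value of 220
print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))
```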
Backtracking Algorithms
A backtracking algorithm is similar to brute force, with the exception that as soon as it reaches a node from which a solution could never be reached, it prunes all subsequent nodes and backtracks to the closest node that might still lead to a solution. Pruning in this context means eliminating the failed branch as a potential solution branch in all further searches, reducing the scope of the possible solution set and eventually guiding the program to the right outcome.

Examples of problems that can be solved by algorithms that take advantage of backtracking include Sudoku and the N-Queens Problem.
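A sketch of an N-Queens solver shows the pattern: place one queen per row, prune any square already under attack, and backtrack when a row has no valid square.

```python
def solve_n_queens(n):
    """Count N-Queens solutions, pruning branches that attack a placed queen."""
    solutions = []

    def place(row, cols, diag1, diag2, board):
        if row == n:                                 # all queens placed
            solutions.append(board[:])
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                             # prune: square is attacked
            board.append(col)
            place(row + 1, cols | {col},
                  diag1 | {row - col}, diag2 | {row + col}, board)
            board.pop()                              # backtrack: undo placement

    place(0, set(), set(), set(), [])
    return solutions

print(len(solve_n_queens(8)))  # 92 distinct solutions
```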
Efficiency
Measuring the efficiency of an algorithm is standardized by checking how well it scales with more inputs: computer scientists calculate how much computational time increases as the number of inputs grows. Since this form of measurement only intends to capture how an algorithm grows, constants are left out; with large inputs these constants are negligible anyway. Big-O notation specifically describes the worst-case scenario and measures the time or space the algorithm uses[8]. Big-O notation can be broken down into orders of growth such as O(1), O(N), and O(log N), with each notation representing a different rate of growth. The last of these, logarithmic algorithms, are a bit more complex than the rest: such an algorithm takes the median of a data set and compares it to a target value, continuing to halve the data as long as the median is higher or lower than the target[8]. An algorithm with a higher Big-O is less efficient at large scales; for example, an O(N) algorithm will generally run slower than an O(1) algorithm, and this difference becomes more and more apparent as the number of inputs grows.
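The halving procedure described above is essentially binary search, a standard O(log N) algorithm; a short sketch:

```python
def binary_search(sorted_data, target):
    """Find target's index by halving the search range each step: O(log N)."""
    lo, hi = 0, len(sorted_data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # the "median" the paragraph describes
        if sorted_data[mid] == target:
            return mid
        elif sorted_data[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```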
Artificial Intelligence Algorithms
Clustering
Clustering is a machine learning technique in which, given a set of data points, an algorithm segregates the data into groups called clusters. These clustering algorithms classify the data based on various criteria, but the fundamental premise is that data points with similarities belong in the same group, which must be dissimilar to other groups. There are numerous clustering algorithms, including K-Means clustering, Mean-Shift clustering, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), EM (Expectation Maximization) clustering, and Agglomerative Hierarchical clustering. [9]
K-Means Clustering
K-means clustering is the most widely known and used of all the clustering algorithms. It involves pre-determining a target number, k, which represents the number of centroids needed in the dataset; a centroid is the predicted center of a cluster. The algorithm then assigns the data points nearest to each center to form the clusters, while keeping k as small as possible. [10] K-Means clustering is considered a fast algorithm due to the minimal computations it requires, with a Big-O complexity of O(n). [9]
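A minimal NumPy sketch of the k-means loop follows (initialization and empty-cluster handling are simplified; real libraries such as scikit-learn are more robust):

```python
import numpy as np

def k_means(points, k, iterations=100, seed=0):
    """Alternate two steps: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # distance from every point to every centroid, then nearest assignment
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its cluster (keep it if empty)
        new_centroids = np.array([points[labels == i].mean(axis=0)
                                  if np.any(labels == i) else centroids[i]
                                  for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break                     # converged: centroids stopped moving
        centroids = new_centroids
    return labels, centroids

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, centroids = k_means(points, k=2)
print(labels)                         # two clusters, e.g. [1 1 0 0]
```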
Mean-Shift Clustering
Mean-Shift clustering, also known as mode-seeking, is an algorithm in which data points are grouped into clusters by iteratively shifting all the points towards their mode. The mode of a dataset is its most frequently occurring value, or in graphical terms, the point where the density of data points is highest. The algorithm "shifts" each point toward its closest centroid, in a direction determined by the density of the nearby points, so each iteration moves each point closer to where the other points are, eventually forming a cluster center. The key difference between Mean-Shift and K-Means clustering is that K-Means requires the number k to be set beforehand, whereas the Mean-Shift algorithm creates clusters on the go without pre-determining how many will be formed. [11] Usually, the Big-O complexity of such an algorithm is O(Tn^2), where T refers to the number of iterations in the algorithm. [12]
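A simplified flat-kernel sketch of the shifting loop is shown below (production implementations typically weight neighbors with a Gaussian kernel); note that the T sweeps over all pairwise distances reflect the O(Tn^2) complexity mentioned above.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iterations=50):
    """Iteratively shift every point toward the mean of its neighbors within
    `bandwidth`, so points drift to local density peaks (modes)."""
    shifted = points.astype(float).copy()
    for _ in range(iterations):                       # T iterations ...
        for i, p in enumerate(shifted):               # ... over n points
            neighbors = shifted[np.linalg.norm(shifted - p, axis=1) < bandwidth]
            shifted[i] = neighbors.mean(axis=0)       # shift toward the mode
    return shifted

points = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.8]])
print(np.round(mean_shift(points), 2))  # the two groups collapse onto two modes
```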
DBSCAN Clustering
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is an algorithm that groups together nearby data points based on a measure of distance (often Euclidean distance) and a minimum number of points, and it marks points in low-density areas as outliers. The algorithm requires two parameters: eps and minPoints. What to set these parameters to varies from dataset to dataset and requires a fundamental understanding of the context of the data being used. The eps value should be picked based on the distances in the dataset; while small eps values are generally desirable, too small a value risks leaving a portion of the data unclustered, and too large a value may group too many points into the same cluster. The minPoints parameter is usually derived from the number of dimensions D in the data, following minPoints ≥ D + 1. [13] The average run-time complexity of a DBSCAN algorithm is O(n log n), whereas its worst case can be O(n^2).
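Assuming scikit-learn is available, DBSCAN can be run with the two parameters described above; min_samples plays the role of minPoints, and the toy data here is invented for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense groups plus one far-away point that should be flagged as noise.
points = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                   [5.0, 5.0], [5.1, 5.1], [5.0, 5.2],
                   [20.0, 20.0]])

# eps is the neighborhood radius; min_samples corresponds to minPoints.
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 1 -1]; the label -1 marks the outlier
```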
EM Clustering
Expectation Maximization (EM) clustering is similar to the K-Means clustering technique, extending its basic principles in two primary ways. First, EM clustering calculates which data points belong in which cluster using one or more probability distributions, instead of simply trying to calculate and maximize the difference in mean points. Second, the overall purpose of the algorithm is to maximize the likelihood of each data point's membership in a cluster.

Essentially, the EM clustering method approximates the distribution of each point based on different probability distributions; in the end, each observation has a certain probability of belonging to each cluster. The resulting clusters are then analyzed based on which have the highest classification probabilities. [14]
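Assuming scikit-learn is available, its GaussianMixture estimator implements EM and exposes the per-cluster probabilities described above (the blob data is invented for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic Gaussian blobs with different means.
data = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                  rng.normal(5.0, 0.5, size=(50, 2))])

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
# Unlike k-means' hard assignments, EM yields a probability per cluster.
print(gm.predict_proba(data[:2]).round(3))  # soft memberships for two points
print(gm.predict(data[:2]))                 # hard labels via highest probability
```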
Agglomerative Hierarchical Clustering
Also known as AGNES (Agglomerative Nesting), the agglomerative hierarchical clustering technique also creates clusters based on similarity. To start, this method treats each object as if it were its own cluster. It then merges clusters in pairs until one huge cluster, consisting of all the individual clusters, has been formed. The result is represented in the form of a tree called a dendrogram. This manner of working is called the "bottom-up" technique: each data entry is considered an individual element, or leaf node, and at each following stage an element is joined with its closest or most similar element to form a bigger element, or parent node. The process is repeated until the root node is formed, with all of the other nodes beneath it.

The opposite of this technique is the "top-down" method, implemented in an algorithm called divisive clustering. This method starts at the root node, and at each iteration nodes are split, or "divided," into two separate nodes based on the dissimilarity within the clusters. This is done until every node has been divided, leaving individual clusters, or leaf nodes. [15]
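Assuming SciPy is available, the bottom-up merging and a cut of the resulting dendrogram can be sketched as follows (the points are invented for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9], [2.5, 2.5]])

# Bottom-up: every point starts as its own cluster (a leaf node), and the two
# closest clusters are merged at each step until only the root remains.
merges = linkage(points, method="average")

# Cut the tree into two flat clusters; scipy.cluster.hierarchy.dendrogram
# can draw the full tree when used with matplotlib.
labels = fcluster(merges, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2 1]
```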
Deep Learning and Neural Networks
Neural networks are a collection of algorithms that utilize many of the concepts mentioned above while taking their capabilities a step further through deep learning. At a high level, the purpose of a neural network is to interpret raw input data through machine perception and return patterns in the data, using techniques such as K-means clustering or random forests. To do so, a neural network requires datasets to train on, modeling its interpretations on the training sets in a machine learning process. Where neural networks differ is their ability to be "stacked" to engage in deep learning. Each process is held in nodes that can be likened to neurons in a human brain: when data is encountered, many separate computations occur, each weighted to produce the desired output. The number of "layers", or the depth of a neural network, increases its capabilities and complexity multiplicatively. [16]
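As an illustrative sketch (the architecture and hyperparameters are chosen arbitrarily), a two-layer network learning XOR in plain NumPy shows the stacked layers and weighted computations described above:

```python
import numpy as np

# XOR: a pattern a single layer cannot learn, but a stacked (deep) network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # hidden layer ("neurons")
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                    # training loop (gradient descent)
    h = sigmoid(X @ W1 + b1)             # forward pass through layer 1
    out = sigmoid(h @ W2 + b2)           # forward pass through layer 2
    # backward pass: propagate the error and nudge each layer's weights
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())              # should approach [0 1 1 0]
```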
Ethical Dilemmas
With the relevance of algorithms as well as their sheer magnitude, ethical dilemmas were bound to arise. Potential ethical issues related to algorithms and computer science include privacy, data gathering, and bias.
Bias
Given that people are the creators of algorithms, code can inherit bias from its coder or its initial source data.
Joy Buolamwini and Facial Recognition
Joy Buolamwini, a graduate computer science student at MIT, experienced a case of this firsthand. The facial recognition software she was working on failed to detect her face because her skin tone had not been accounted for in the algorithm: the software had been trained with a machine learning dataset that was not diverse enough.[17] Safiya Noble discusses instances of algorithmic search engines reinforcing racism in her book, "Algorithms of Oppression".[18] Bias like this occurs in countless algorithms, whether through insufficient machine learning datasets or the developers' own fault, among other reasons, and it has the potential to cause legitimate problems even outside the realm of ethics.
Bias in Criminalization
COMPAS is an algorithm written to determine whether a criminal is likely to re-offend, using information like age, gender, and previously committed crimes. Tests have found it more likely to incorrectly evaluate black people than white people because it learned from historical criminal data, which has been influenced by biased policing practices.[19]
Jerry Kaplan is a research affiliate at Stanford University's Center on Democracy, Development and the Rule of Law at the Freeman Spogli Institute for International Studies, where he teaches "Social and Economic Impact of Artificial Intelligence." According to Kaplan, algorithmic bias can even influence whether or not a person is sent to jail. A 2016 study conducted by ProPublica indicated that software designed to predict the likelihood an arrestee will re-offend incorrectly flagged black defendants twice as frequently as white defendants in a decision-support system widely used by judges. Ideally, predictive systems should be wholly impartial and therefore agnostic to skin color. Surprisingly, however, the program cannot give black and white defendants who are otherwise identical the same risk score while simultaneously matching the actual recidivism rates for the two groups. This is because black defendants are re-arrested at higher rates than their white counterparts (52% versus 39%), at least in part due to racial profiling, inequities in enforcement, and harsher treatment of black people within the justice system. [20]
Job Applicants
Many companies employ complex algorithms to review and sift the thousands of resumes they receive each year. Sometimes these algorithms display a bias, which can result in people of a specific racial background or gender being recommended over others. An example of this was an Amazon AI algorithm that preferred men over women when recommending people for an interview. The algorithm employed machine learning techniques and over time taught itself to prefer men over women due to a variety of factors [21]. A major problem facing machine learning algorithms is the unpredictability of their models and what they will teach themselves. Amazon's engineers clearly did not intend for their algorithm to be biased towards men, yet an error resulted in exactly that. Nor was this a quick fix: the algorithm had been in place for years, picking up this bias, and it was only discovered after later analysis.
Privacy And Data Gathering
The ethical issue of privacy is also highly relevant to the concept of algorithms. Information transparency [22] is an important point regarding these issues. In popular social media algorithms, user information is often probed without the knowledge of the individual, and this can lead to problems. It is often not transparent how these algorithms receive user data, resulting in often-incorrect information that can affect both how a person is treated within social media and how outside agents view these individuals given false data. Algorithms can also infringe on a user's sense of privacy, as data can be collected that a person would prefer to keep private. Data brokers are in the business of collecting people's information and selling it to anyone for a profit, and like data brokers, companies often have their own collections of data. In 2013, Yahoo was hacked, leading to the leak of data pertaining to approximately three billion users.[23] The leaked information contained usernames, passwords, and dates of birth. Privacy and data gathering are common ethical dilemmas relating to algorithms and are often not considered thoroughly enough by algorithms' users.
The Filter Bubble
Algorithms can be used to filter results in order to prioritize items that the user might be interested in. On some platforms, like Amazon, people can find this filtering useful because of the shopping recommendations the algorithm provides. However, in other scenarios, this algorithmic filtering can become a problem. For example, Facebook has an algorithm that re-orders the user's news feed. For a period of time, the technology company prioritized sponsored posts in its algorithm. This often prioritized news articles, but there was no certainty that these articles came from a reliable source, simply the fact that they were sponsored. Facebook also uses its technology to gather information about its users, like which political party they belong to. Combined with prioritizing news, this can create a Facebook feed filled with only one party's perspective. This phenomenon is called the filter bubble, which essentially creates a platform centered completely around its user's interests.

Many, like Eli Pariser, have questioned the ethical implications of the filter bubble. Pariser believes that filter bubbles are a problem because they prevent users from seeing perspectives that might challenge their own. Even worse, Pariser emphasizes that this filter bubble is invisible, meaning that the people in it do not realize that they are in it. [24] This creates a huge lack of awareness, allowing people to stand by often-uninformed opinions and creating separation, instead of collaboration, with users who have different beliefs. Because of the issues Pariser outlined, Facebook decided to change its algorithm to prioritize posts from friends and family, in hopes of eliminating the effects of the potential filter bubble.
Filter Bubble in Politics
Another issue that these filter bubbles create is echo chambers: Facebook, in particular, filters out [political] content that one might disagree with, or simply not enjoy [25]. The more a user "likes" a particular type of content, the more similar, and perhaps even more extreme, content continues to appear. This was clearly seen in the 2016 election, when voters developed tunnel vision without realizing it. Rarely did their Facebook comfort zones expose them to opposing views, and as a result they eventually became victims of their own biases and the biases embedded within the algorithms.[26] Later studies produced visualizations showing how insular the country was on social media at the time of the election, with a large divide between two echo chambers that had almost no ties to each other.[27]
Corrupt Personalization
Algorithms have the potential to become dangerous; among their most serious repercussions is the threat to democracy posed by extensive personalization. Algorithms such as Facebook's are corrupt in their practice of "like recycling." In Christian Sandvig's article titled "Corrupt Personalization," he notes that Facebook has defined a "like" in two ways that users do not realize. The first is that "anyone who clicks on a 'like' button is considered to have 'liked' all future content from that source," and the second is that "anyone who 'likes' a comment on a shared link is considered to 'like' wherever that link points to" [28]. As a result, posts that you "like" wind up becoming ads on your friends' pages claiming that you like a certain item or thing. You are not able to see these posts and, because they do not appear on your news feed, you do not have the power to delete them. This becomes a threat to one's autonomy: even someone who wanted to delete such a post cannot. Furthermore, everyone is entitled to manage the public presentation of their own self-identity, and in this corrupt personalization, Facebook is giving users new aspects of their identity that may or may not be accurate.
Agency And Accountability
Algorithms make "decisions" based on the steps they were designed to follow and the input they receive. This can often position algorithms as autonomous agents[29], taking decision-making responsibilities out of the hands of real people. Useful in terms of efficiency, these autonomous agents are capable of making decisions at a greater frequency than humans, and efficiency is the baseline rationale for algorithm use in the first place.
From an ethical standpoint, this type of agency raises many complications, specifically regarding accountability. It is no secret that many aspects of life are run by algorithms; even events like applying for jobs are often drastically affected by these processes. Information like age, race, and status, along with other qualifications, is fed to algorithms, which then take agency and decide who moves further along in the hiring process and who is left behind. Even disregarding inherent biases, this process reduces the input of real humans and the number of decisions they have to make, leaving autonomous agents to make systematic decisions that have an extraordinary impact on people's lives. While the results of the previous example may only amount to the occasional disregard of a qualified applicant or resentful feelings, this same principle can be much more influential.
The Trolley Problem in Practice
Consider autonomous vehicles, or self-driving cars, for instance. These are highly advanced algorithms programmed to make split-second decisions with the greatest possible accuracy. In the case of the well-known "Trolley Problem"[30], these agents are forced to make a decision jeopardizing one party or another. This decision can easily result in the injury or even death of individuals, all at the discretion of a mere program.

The issue of accountability is then raised in a situation such as this, because in the eyes of the law, society, and ethical observers, there must be someone held responsible. Attempting to prosecute a program would not be feasible in a legal setting, since a program cannot be physically represented in court the way a person can. However, there are those, such as Frances Grodzinsky and Kirsten Martin [31], who believe that the designers of an artificial agent should be responsible for the actions of the program. [32] Others contend that the blame should be attributed to the users or persons directly involved in the situation.
These complications will continue to arise, especially as algorithms continue to make autonomous decisions at grander scales and rates. Determining responsibility for the decisions these agents make will continue to be a vexing process, and will no doubt shape in some form many of the advanced algorithms that will be developed in the coming years.
Intentions and Consequences
The ethical consequences that are common in algorithm implementations can be either deliberate or unintentional. Instances where an algorithm's intent and outcome differ are noted below.
YouTube Radicalization
Scholar and technosociologist Zeynep Tufekci has claimed that "YouTube may be one of the most powerful radicalizing instruments of the 21st century."[33] As YouTube's algorithms aim to maximize the amount of time viewers spend watching, the company inevitably discovered that the best way to do this was to show videos that slowly "up the stakes" of the subject being watched: from jogging to ultramarathons, from vegetarianism to veganism, from Trump speeches to white supremacist rants.[33] Thus, while YouTube's intention is to keep viewers watching (and bring in advertising money), it has unintentionally created a site that shows viewers more and more extreme content, contributing to radicalization. Such activity circles back to and reinforces filter bubbles and echo chambers.
Facebook Advertising
A closer look at the algorithm Facebook uses to serve ads to its users reveals prominent gender and racial bias. Using demographic and racial background as factors, Facebook decides which ads are served to which users. A team from Northeastern tested this algorithmic bias in action: by running identical ads with slight tweaks to budget, images, and headings, the ads reached vastly different audiences, with results such as minorities receiving a higher percentage of low-cost housing ads and women receiving more ads for secretarial and nursing jobs [34]. Although Facebook's intent may be to reach the people these ads are intended for, the companies signing up to advertise with Facebook have stated they did not anticipate this type of filtering when paying for its services. And although Facebook may believe this is a win-win, since advertisers get more interactions with targeted audiences and people see ads more relatable to them, it is discriminatory to target people based on factors that are uncontrollable. Facebook needs to adjust its targeted advertising practices by removing racial and gender factors from its algorithms in order to avoid perpetuating stereotypes and placing people in boxes by race and gender. This type of algorithm goes against many ethical principles, and it is important that powerful technology companies not set poor examples for others.
References
- ↑ https://searchmobilecomputing.techtarget.com/definition/Apple-Watch
- ↑ T.C. (August 29, 2017). "What Are Algorithms?" The Economist. Retrieved April 28, 2019.
- ↑ McClelland, Calum. (December 4, 2017). "The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning". Medium. Retrieved April 28, 2019.
- ↑ "Human Computers". NASA Cultural Resources. crgis.ndc.nasa.gov/historic/Human_Computers. Retrieved April 28, 2019.
- ↑ Anyoha, Rockwell (August 28, 2017). "The History of Artificial Intelligence". Science in the News. Harvard. Retrieved April 28, 2019.
- ↑ " Floyd, Robert W. (November 1996) Non-Deterministic Algorithms. Carnegie Institute of Technology. pp. 1–17."
- ↑ "Divide and Conquer Algorithms". Geeks for Geeks. Retrieved April 28, 2019.
- ↑ 8.0 8.1 Bell, Rob. "A Beginner's Guide to Big O Notation". Retrieved April 28, 2019.
- ↑ 9.0 9.1 Seif, George (February 5, 2018). "The 5 Clustering Algorithms Data Scientists Need To Know". Retrieved April 28, 2019.
- ↑ Garbade, Michael J. (September 12, 2018). "Understanding K-Means Clustering in Machine Learning". Towards Data Science. Retrieved April 28, 2019.
- ↑ "Meanshift Algorithm for the Rest of Us (Python)", May 14, 2016. Retrieved April 28, 2019.
- ↑ Thirumuruganathan, Saravanan (April 1, 2010). "Introduction To Mean Shift Algorithm". Retrieved April 28, 2019.
- ↑ Salton do Prado, Kelvin (April 1, 2017). "How DBSCAN works and why should we use it?". Towards Data Science. Retrieved April 28, 2019.
- ↑ "Expectation Maximization Clustering" RapidMiner Documentation. Retrieved April 28, 2019.
- ↑ "HIERARCHICAL CLUSTERING IN R: THE ESSENTIALS/Agglomerative Hierarchical Clustering". DataNovia. Retrieved April 28, 2019.
- ↑ A Beginner's Guide to Neural Networks and Deep Learning. (n.d.). Retrieved April 27, 2019, from https://skymind.ai/wiki/neural-network
- ↑ Buolamwini, Joy. "How I'm Fighting Bias in Algorithms". MIT Media Lab. www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/. Retrieved April 28, 2019.
- ↑ Noble, Safiya. Algorithms of Oppression.
- ↑ Larson, Mattu; Kirchner, Angwin (May 23, 2016). "How We Analyzed the COMPAS Recidivism Algorithm". Propublica. Retrieved April 28, 2019.
- ↑ Kaplan, J. (December 17, 2018). "Why your AI might be racist". Washington Post. Retrieved April 28, 2019.
- ↑ Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women.” Reuters, Thomson Reuters, 9 Oct. 2018, af.reuters.com/.
- ↑ Turilli, Matteo, and Luciano Floridi (2009). "The Ethics of Information Transparency." Ethics and Information Technology. 11(2): 105-112. doi:10.1007/s10676-009-9187-9.
- ↑ Griffin, Andrew (October 4, 2017) "Yahoo Admits It Accidentally Leaked the Personal Details of Half the People on Earth." The Independent. Retrieved April 28, 2019.
- ↑ Pariser, Eli. (2012). The Filter Bubble. Penguin Books.
- ↑ Knight, Megan (November 30, 2018). "Explainer: How Facebook Has Become the World's Largest Echo Chamber". The Conversation. Retrieved April 28, 2019.
- ↑ El-Bermawy, Mostafa M. (June 3, 2017). "Your Filter Bubble Is Destroying Democracy". Wired. Retrieved April 28, 2019.
- ↑ MIS2: Misinformation and Misbehavior Mining on the Web - Scientific Figure on ResearchGate. Available from: https://www.researchgate.net/figure/Social-media-platforms-can-produce-echo-chambers-which-lead-to-polarization-and-can_fig4_322971747 [accessed 23 Apr, 2019]
- ↑ Sandvig, Christian. “Corrupt Personalization.” Social Media Collective, 27 June 2014, socialmediacollective.org/2014/06/26/corrupt-personalization/
- ↑ “Autonomous Agent.” Autonomous Agent - an Overview | ScienceDirect Topics, www.sciencedirect.com/topics/computer-science/autonomous-agent.
- ↑ Roff, Heather M. “The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles.” Brookings, Brookings, 17 Dec. 2018, www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
- ↑ Martin, Kirsten. “Ethical Implications and Accountability of Algorithms.” SpringerLink, Springer Netherlands, 7 June 2018, link.springer.com/article/10.1007/s10551-018-3921-3.
- ↑ "The ethics of designing artificial agents" by Frances S. Grodzinsky et al, Springer, 2008.
- ↑ 33.0 33.1 Tufekci, Zeynep. "YouTube, the Great Radicalizer." The New York Times, 10 Mar. 2018, www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
- ↑ Hao, Karen. “Facebook's Ad-Serving Algorithm Discriminate by Gender and Race.” MIT Technology Review, 5 Apr. 2019, www.technologyreview.com/.