Racial Algorithmic Bias


This article outlines three key aspects of racial algorithmic bias. First, it describes what racial algorithmic bias is and briefly touches on the possible reasons for its presence in machines. Next, it outlines instances of racial algorithmic bias in the machines used by top-performing industries today. Lastly, it examines the ethical issues that give rise to racial algorithmic bias and its ethical implications.


Racial algorithmic bias refers to errors in algorithms that skew results and create unfair advantages or partial outcomes for certain racial groups. To begin, machines learn through the use of algorithms: sets of computer-implementable instructions used to solve a problem. Racial algorithmic bias can stem from errors in the design of the machine or algorithm, or from sampling errors in the data used during machine learning to develop the algorithm.

Human/Creator Bias: One plausible reason racial algorithmic bias arises is that machines are subject to the minds of their creators, human beings, and learn from real-time data that may be erroneous or partial. Because of the historic roots of racial bias in the real world, the real-world data used to create algorithms can be skewed and can thus produce a biased algorithm. Human minds are subjective, and racial bias remains an issue society has faced for decades. Algorithms that now dictate important decisions for millions of people can quietly encode those racial biases, and can therefore decide monumental questions unfairly.

Training Data Bias: Linked to human/creator bias is training data bias, the human bias carried in the data an algorithm uses to train itself to make a decision. This indirect form of racial bias occurs when variables such as gender, race, or sexual orientation are removed, yet the data used to train the algorithm to form trends and patterns remains skewed, as the sketch below illustrates.[1] Training data bias is most prevalent in predictive policing, which gathers immense data based on location along with information about individuals, including gender, marital status, substance abuse, and criminal records.[2] Most notably, predictive algorithms are easily skewed by arrest rates; the combination of this data with bias favoring certain factors over others has led to racial bias in predictive policing methods.[3] Other forms of training data bias come from flawed data sampling, such as underrepresentation of minorities, which leads to higher error rates and broad generalizations due to the small data set.[4]
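The following is a minimal sketch, with entirely synthetic data and hypothetical numbers (groups "A" and "B", made-up zip codes and arrest rates), of how this indirect bias can survive the removal of the race column: residential zip code acts as a proxy for race, so a model trained on zip code alone still scores the two groups very differently.

    import random
    from collections import defaultdict

    random.seed(0)

    def make_person():
        race = random.choice(["A", "B"])
        # Hypothetical residential segregation: race strongly predicts zip code.
        if race == "A":
            zip_code = "00001" if random.random() < 0.9 else "00002"
        else:
            zip_code = "00002" if random.random() < 0.9 else "00001"
        # Historically biased records: group B is arrested more often for the
        # same underlying behavior, so its zip code accumulates more records.
        arrested = random.random() < (0.10 if race == "A" else 0.30)
        return race, zip_code, arrested

    people = [make_person() for _ in range(10000)]

    # "Training": estimate an arrest rate per zip code, with race deliberately
    # excluded from the features.
    counts = defaultdict(lambda: [0, 0])  # zip code -> [arrests, total]
    for _, zip_code, arrested in people:
        counts[zip_code][0] += arrested
        counts[zip_code][1] += 1
    risk = {z: a / n for z, (a, n) in counts.items()}

    # "Prediction": the model scores by zip code alone, yet its output still
    # splits along racial lines, because zip code stands in for race.
    for group in ("A", "B"):
        scores = [risk[z] for r, z, _ in people if r == group]
        print(group, "mean risk score:", round(sum(scores) / len(scores), 3))

Running this prints a much higher mean risk score for group B even though race never appears as a feature; the skew in the historical labels re-enters through the correlated zip code.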

As machines become increasingly omnipresent, discussions of algorithmic bias and computer ethics have become more and more important to help prevent partial outcomes in high-revenue industries, such as healthcare and incarceration, that significantly drive our economy and drastically affect lives. As many of these industries turn to computer-generated decisions for a seemingly more objective selection, racial algorithmic bias can create potentially unfair disadvantages for certain racial groups. There are many instances of dire consequences of racial bias in algorithms.

Cases of Racial Algorithmic Bias in Industries

Healthcare

A physician and researcher at the UC Berkeley School of Public Health published a paper revealing that a major medical center’s algorithm for identifying which patients needed extra medical care was racially biased. “The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program”[5]. However, the white patients the algorithm identified for the program had “fewer chronic health conditions” than the black patients it identified. In other words, black patients needed to be more ill than white patients to be considered for the program. As a result, some patients who truly needed the treatment were pushed back and not enrolled in the program because of their racial background.
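Published analyses of this case traced the disparity to the algorithm's choice of label: it predicted future healthcare costs as a stand-in for health needs, and because less money is historically spent on black patients at the same level of illness, cost understates their need. The toy simulation below (synthetic data and hypothetical numbers, not the actual proprietary model) reproduces that mechanism.

    import random

    random.seed(1)

    def make_patient(group):
        conditions = random.randint(0, 10)            # true health need
        access = 1.0 if group == "white" else 0.8     # unequal access to care
        cost = conditions * access + random.random()  # observed spending
        return group, conditions, cost

    patients = [make_patient(g) for g in ("white", "black") for _ in range(5000)]

    # The screening rule ranks patients by cost, not by need, and enrolls
    # everyone above a hypothetical cutoff score.
    THRESHOLD = 8.0
    enrolled = [p for p in patients if p[2] > THRESHOLD]

    for group in ("white", "black"):
        needs = [c for g, c, _ in enrolled if g == group]
        if needs:
            print(group, "enrolled:", len(needs),
                  "mean chronic conditions:", round(sum(needs) / len(needs), 2))

In this toy run, far fewer black patients clear the cutoff, and those who do are sicker on average than the enrolled white patients, mirroring the pattern the paper reported.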

Legal System

There are also instances of racial algorithmic bias skewing decisions in the legal sector. For instance, Vernon Prater, a white man, was charged with multiple armed and attempted armed robberies, and was also charged with petty theft for shoplifting around $85 worth of supplies from Home Depot. Compare Vernon Prater to Brisha Borden, a black woman, who was charged with burglary and petty theft for picking up a bike and a scooter (a combined value of $80), which she quickly dropped once the owner came running.[6] When Prater and Borden were jailed, a computer algorithm was run to predict who was more likely to commit a crime again in the future. The algorithm incorrectly predicted that Borden would be much more likely to reoffend. After both were released, Borden had not committed any new crimes, while Prater had broken into a warehouse and stolen thousands of dollars' worth of supplies.

Another instance of racial profiling in the legal system can be found in predictive policing. Initially conceived as an innovative way to optimize police resource allocation, predictive policing ultimately proved to be a tool that automates racial biases already present in the policing system.[7] As with all machine learning and artificial intelligence systems, these models must be trained on historical data. By increasing policing in already over-policed areas, departments see an increase in arrests in underrepresented communities driven by racial profiling and over-policing, and those arrests feed back into the training data, as simulated in the sketch below. A separate data collection effort, "Disproportionate Risks of Driving While Black," demonstrated that black drivers are significantly more likely to be stopped and searched on the road. These biases can be fed into the algorithms used to implement predictive policing and lead to higher levels of racial profiling and disproportionate arrests.
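The feedback loop can be made concrete with a short simulation (hypothetical numbers, not any real department's dispatch rule): two districts have identical true offense rates, but one starts slightly over-policed, and each round patrols are shifted toward wherever more arrests were recorded.

    TRUE_OFFENSE_RATE = 0.1            # identical in both districts
    patrols = {"A": 48.0, "B": 52.0}   # district B starts slightly over-policed

    for round_number in range(6):
        # Arrests are recorded where officers already are, not where crime is.
        arrests = {d: TRUE_OFFENSE_RATE * patrols[d] for d in patrols}
        hot = max(arrests, key=arrests.get)
        cold = "A" if hot == "B" else "B"
        # "Retraining": shift 10% of the colder district's patrols toward the
        # predicted hot spot, since the data shows more arrests there.
        shift = 0.10 * patrols[cold]
        patrols[hot] += shift
        patrols[cold] -= shift
        print(round_number, {d: round(p, 1) for d, p in patrols.items()})

Even though both districts offend at the same rate, the initial 52/48 skew compounds every round and district B absorbs an ever larger share of patrols; the model's predictions look self-confirming because the arrest data it trains on is produced by its own deployments.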

Technology

Large tech firms have also repeatedly come under scrutiny because a lack of diversity in the workplace has manifested itself in racially insensitive products. For example, in 2015, Google was the subject of a massive PR scandal when its Photos app auto-categorized two black people as "gorillas". The blunder went viral after Jacky Alciné, a web developer, tweeted a photo of the miscategorization.[8]

[Image: Racial Automation with Apple Animoji]
Another example of racial automation is smart cameras detecting Asian subjects as "blinking". Nikon and Sony cameras both came under scrutiny after reports of their "Someone blinked!" warning appearing repeatedly in photos of Asian subjects.[9]


Sources of Racial Algorithmic Bias and Ethical Implications

In recognizing the presence of racial algorithmic bias in certain machines, the question of machine neutrality arises. “[Let’s call] the idea that technology itself is neutral with respect to consequences… the neutrality thesis”[10]. Racial algorithmic bias, however, confirms that algorithms currently reflect some degree of human bias. One source of racial algorithmic bias is the creator of the machine. There is a great gender and race disparity in the field of engineering: according to the World Economic Forum, “only 22 percent of professionals with AI skills are female”[11]. The outcomes of the respective algorithms reflect that uniformity, and from it arises machine learning bias, including racial algorithmic bias. A second source is the way data is collected. Machine learning uses data entries to teach the machine and thereby create an algorithm, but because experiments are crafted by humans, the data is prone to sampling and collection errors, and the collection process itself is subject to human error. Skewed data used in the creation of an algorithm can give rise to partial results, including racial algorithmic bias, as the sketch below illustrates.
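As a sketch of the sampling problem (synthetic data and a deliberately toy model, not any production system), the following fits a single least-squares line to a pooled dataset in which one group supplies only 5 percent of the examples; the fit tracks the majority group's relationship, so test error is far higher for the underrepresented group.

    import random

    random.seed(2)

    def sample(slope, n):
        # Each group follows its own true relationship: y = slope * x + noise.
        pts = []
        for _ in range(n):
            x = random.uniform(0.0, 1.0)
            pts.append((x, slope * x + random.gauss(0.0, 0.1)))
        return pts

    # 95/5 imbalance: the minority group supplies only 5% of the training data.
    train = sample(1.0, 950) + sample(2.0, 50)

    # A deliberately tiny "model": a least-squares line through the origin,
    # fit once on the pooled data. It is pulled toward the majority slope.
    fit = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

    for name, slope in (("majority", 1.0), ("minority", 2.0)):
        test = sample(slope, 10000)
        mse = sum((y - fit * x) ** 2 for x, y in test) / len(test)
        print(name, "test error (MSE):", round(mse, 4))

The pooled fit lands near the majority group's slope, so the minority group's test error comes out many times higher; collecting balanced data, or modeling the groups separately, removes the gap in this toy setting.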

Because machines are used ever more frequently in large industries and have the power to dictate answers to significant questions, the implications of racially biased and inequitable algorithms are dire. As machines gain more agency, the ethical implications of discrimination built into algorithms become more important to discuss and work out.

Persona Development: One key factor that contributes to racial algorithmic bias is the development of personas, or profiles of the "ideal user" of a product, in the user experience design process. Only in recent literature have designers and engineers begun to point out that attaching age, gender, or race to these personas, rather than driving them entirely by user need, can anchor creators in their own biases.[12]

Concern with Future Generations: As younger generations turn to the Internet more than their predecessors did, with Generation Z the first generation never to know a world without the Internet[13], a large concern about present racial algorithmic bias is its effect on younger individuals and the shaping of their identity. A 2018 Pew Research Center study found that 95 percent of teens have access to a smartphone and 45 percent describe themselves as being online "almost constantly", heightening the concern that racial algorithmic biases will impose long-term psychological harms on younger generations.[14] Potential psychological impacts of racial algorithmic bias on youth of color include negative effects on sleep, academic performance, self-esteem, and even gene expression.[15] A recent notable example of racial algorithmic bias shaping youth is the social media platform TikTok, whose content-filtering algorithm has produced echo chambers in which everyone on a given feed looks alike; one way such narrowing can emerge is sketched below. Concerns arise that this content filtering could diminish the younger generation's capacity for empathy and further segregate racial groups from one another.[16]
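TikTok's actual ranking system is proprietary, so the sketch below uses a generic engagement-driven recommender with hypothetical numbers to show how such narrowing can emerge: a small difference in engagement is reinforced every round until the feed converges on one group of creators.

    import random

    random.seed(3)

    creator_groups = ["A", "B", "C", "D"]
    weights = {g: 1.0 for g in creator_groups}  # feed starts as a uniform mix

    def pick_shown():
        # Sample a creator group in proportion to its current feed weight.
        total = sum(weights.values())
        r = random.uniform(0.0, total)
        for g in creator_groups:
            r -= weights[g]
            if r <= 0.0:
                return g
        return creator_groups[-1]

    for _ in range(2000):
        shown = pick_shown()
        # Mild homophily in engagement: the user likes content from their own
        # group ("A") 60% of the time and everything else 40% of the time.
        liked = random.random() < (0.6 if shown == "A" else 0.4)
        if liked:
            weights[shown] *= 1.05              # reinforce whatever got liked

    total = sum(weights.values())
    print({g: round(w / total, 3) for g, w in weights.items()})

On a typical run the feed's weight concentrates almost entirely on group A: a modest 60-versus-40 engagement gap, compounded multiplicatively, is enough to crowd the other groups out, which is the echo-chamber dynamic described above.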

References

  1. What Do We Do About the Biases in AI? (2019, October 25). Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  2. Heaven, W. D. (2020, December 10). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
  3. Heaven, W. D. (2020, December 10). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
  4. What Do We Do About the Biases in AI? (2019, October 25). Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  5. Racial Bias Found in Algorithms That Determine Health Care for Millions of Patients. IEEE Spectrum. https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients
  6. Machine Bias: Risk Assessments in Criminal Sentencing. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  7. Statement of Concern About Predictive Policing by ACLU and 16 Civil Rights Privacy, Racial Justice, and Technology Organizations. Retrieved April 16, 2021.
  8. Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms. Retrieved April 16, 2021.
  9. “Racist” Camera Phenomenon Explained — Almost. Retrieved April 16, 2021.
  10. Values in Technology and Disclosive Computer Ethics. In The Cambridge Handbook of Information and Computer Ethics. Cambridge University Press. https://www.cambridge.org/core/books/cambridge-handbook-of-information-and-computer-ethics/values-in-technology-and-disclosive-computer-ethics/4732B8AD60561EC8C171984E2F590C49
  11. Futurism. https://futurism.com/ai-gender-gap-artificial-intelligence
  12. Template:Cite book
  13. Who uses social media the most? (2019, October 2). World Economic Forum. https://www.weforum.org/agenda/2019/10/social-media-use-by-generation/
  14. Anderson, M., & Jiang, J. (2018, May 31). Teens, Social Media & Technology 2018. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/
  15. Epps-Darling, A. (2020, October 24). Racist Algorithms Are Especially Dangerous for Teens. The Atlantic. https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/
  16. Epps-Darling, A. (2020, October 24). Racist Algorithms Are Especially Dangerous for Teens. The Atlantic. https://www.theatlantic.com/family/archive/2020/10/algorithmic-bias-especially-dangerous-teens/616793/