From SI410
Revision as of 13:04, 15 April 2021

This article outlines three key aspects of racial algorithmic bias. First, it describes racial algorithmic bias in machines and briefly covers the possible reasons for its presence. Next, it outlines instances of racial algorithmic bias in the machines used by major industries today. Lastly, it discusses the ethical issues that give rise to racial algorithmic bias and its ethical implications.

Racial Algorithmic Bias


Racial algorithmic bias refers to errors in algorithms that skew results and create unfair advantages or disadvantages for certain racial groups. Machines learn through the use of algorithms, which are sets of computer-implementable instructions used to solve a problem. Racial algorithmic bias can stem from errors in the design of the machine or algorithm, or from sampling errors in the data used to train it. One plausible reason racial algorithmic bias arises is that machines are shaped by the minds of their human creators and learn from real-world data that may be erroneous or partial. Because of the historic roots of racial bias in society, the real-world data used to build algorithms can be skewed, and a biased algorithm can result.

Many algorithms that dictate important decisions for millions of people may conceal racial biases and can therefore decide monumental questions unfairly. As machines become increasingly omnipresent, discussions of algorithmic bias and computer ethics have become more important in preventing partial outcomes in industries, such as healthcare and incarceration, that drastically affect lives. As many of these industries turn to computer-generated decisions for a seemingly more objective selection, racial algorithmic bias can create unfair disadvantages for certain racial groups. There are many instances of dire consequences of racial bias in algorithms.

Cases of Racial Algorithmic Bias in Industries

Healthcare

A physician and researcher at the UC Berkeley School of Public Health published a paper revealing that a major medical center’s algorithm for identifying which patients needed extra medical care was racially biased. “The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program”[1]. However, the white patients the algorithm identified for enrollment had “fewer chronic health conditions” than the black patients it identified; that is, black patients needed to be more ill than white patients to be considered for the program. As a result, some patients who truly needed the treatment were not enrolled in the program due to their racial background.
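The screening failure described above can be sketched as a proxy mis-measurement problem. The toy simulation below is purely hypothetical (the group names, score ranges, offset, and threshold are invented for illustration, not taken from the study): when the score a screener uses systematically under-records the true need of one group, equally ill members of that group must be sicker to clear the same enrollment threshold.

```python
import random

random.seed(0)

# Hypothetical illustration of proxy mis-measurement: for group "B" the
# recorded score under-counts true need by 2 points (e.g., fewer recorded
# chronic conditions for equally ill patients).
def make_patients(group, n, proxy_offset):
    patients = []
    for _ in range(n):
        true_need = random.randint(0, 10)         # actual severity of illness
        proxy = max(0, true_need - proxy_offset)  # score the screener sees
        patients.append({"group": group, "true_need": true_need, "proxy": proxy})
    return patients

patients = make_patients("A", 1000, 0) + make_patients("B", 1000, 2)

THRESHOLD = 7  # enroll anyone whose recorded score meets the threshold

def enrollment_rate(group):
    # Share of truly high-need patients in the group who get enrolled.
    eligible = [p for p in patients
                if p["group"] == group and p["true_need"] >= THRESHOLD]
    enrolled = [p for p in eligible if p["proxy"] >= THRESHOLD]
    return len(enrolled) / len(eligible)

print(f"group A: {enrollment_rate('A'):.2f}")  # all truly high-need A patients enroll
print(f"group B: {enrollment_rate('B'):.2f}")  # equally ill B patients are screened out
```

In this sketch the threshold itself is race-blind; the disparity comes entirely from the proxy score, which mirrors the article's point that bias can hide in the data an algorithm consumes rather than in an explicit rule.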

Legal System

There are also instances of racial algorithmic bias skewing decisions in the legal sector. For instance, Vernon Prater, a white man, was charged with multiple armed and attempted armed robberies, as well as with petty theft for shoplifting about $85 worth of supplies from Home Depot. Compare Vernon Prater to Brisha Borden, a black woman who was charged with burglary and petty theft for picking up a bike and a scooter (combined value of $80), which she quickly dropped once the owner came running.[2] When Prater and Borden were imprisoned, a computer algorithm was run to predict who was more likely to commit a crime again in the future. The algorithm incorrectly predicted that Borden would be much more likely to reoffend. After both were released, Borden had not committed any new crimes, while Prater had broken into a warehouse and stolen thousands of dollars' worth of supplies.


Technology

Sources of Racial Algorithmic Bias and Ethical Implications

In recognizing the presence of racial algorithmic bias in certain machines, the question of machine neutrality arises. “[Let’s call] the idea that technology itself is neutral with respect to consequences… the neutrality thesis”[3]. However, racial algorithmic bias shows that algorithms currently reflect some degree of human bias. One source of racial algorithmic bias is the creator of the machine. There is a great gender and racial disparity in the field of engineering; according to the World Economic Forum, “only 22 percent of professionals with AI skills are female”[4]. The outcomes of the resulting algorithms reflect that uniformity, which can produce machine learning bias such as racial algorithmic bias. Secondly, racial algorithmic bias can arise from the way data is collected. Machine learning uses data entries to teach the machine and thereby create an algorithm. However, because most experiments are designed by humans, the data is prone to sampling and collection errors. Skewed data used in the creation of an algorithm can produce partial results, including racial algorithmic bias.
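The sampling-error mechanism described above can be illustrated with a minimal, purely hypothetical sketch (the groups, feature values, and sample sizes are invented): a model that learns a single decision threshold from training data dominated by one group performs noticeably worse on an undersampled group whose data is distributed differently.

```python
import random

random.seed(1)

# Hypothetical sketch of sampling bias: the training set is dominated by
# group "A", so the learned threshold fits A's distribution, not B's.
def sample(group, n):
    # Positive cases for A center near 7, for B near 5; negatives near 2.
    rows = []
    for _ in range(n):
        label = random.random() < 0.5
        center = (7 if group == "A" else 5) if label else 2
        rows.append((random.gauss(center, 1.0), label, group))
    return rows

train = sample("A", 950) + sample("B", 50)    # group B is undersampled
test = sample("A", 1000) + sample("B", 1000)

# "Learning": brute-force the threshold that minimizes training error.
best_t = min((t / 10 for t in range(0, 101)),
             key=lambda t: sum((x >= t) != y for x, y, _ in train))

def error_rate(group):
    rows = [(x, y) for x, y, g in test if g == group]
    return sum((x >= best_t) != y for x, y in rows) / len(rows)

print(f"error on A: {error_rate('A'):.3f}")  # low: threshold fits A's data
print(f"error on B: {error_rate('B'):.3f}")  # higher: B was barely sampled
```

Nothing in the training procedure mentions group membership; the unequal error rates fall out of the skewed sample alone, which is the article's point about how data collection can embed bias.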

Because machines are being used more frequently in large industries and have the power to dictate the answers to significant questions, the implications of racial bias and inequitable algorithms are dire. As machines gain more agency, the ethical implications of discrimination built into algorithms become more important to discuss and address.

References

  1. https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients
  2. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. https://www.cambridge.org/core/books/cambridge-handbook-of-information-and-computer-ethics/values-in-technology-and-disclosive-computer-ethics/4732B8AD60561EC8C171984E2F590C49
  4. https://futurism.com/ai-gender-gap-artificial-intelligence