From SI410
Revision as of 13:53, 17 March 2020

Racial Algorithmic Bias

Racial algorithmic bias refers to errors in algorithms that skew results and create unfair advantages or disadvantages for certain racial groups. Machines learn through algorithms, which are sets of computer-implementable instructions used to solve a problem. Racial algorithmic bias can stem from errors in the design of the machine or from sampling errors in the data used for machine learning. Machines reflect the minds of their creators, human beings, and learn from real-world data. Because human minds are subjective, and racial bias remains a persistent social issue, many algorithms that dictate important decisions for millions embed racial biases.

As machines become increasingly omnipresent, discussions of algorithmic bias and computer ethics have become more and more important for preventing partial outcomes in high-stakes industries, such as healthcare and criminal justice, that significantly drive the economy and drastically affect lives. As many of these industries turn to computer-generated decisions for a seemingly more objective selection, racial algorithmic bias can impose unfair disadvantages on certain racial groups. There are many instances of dire consequences of racial bias in algorithms.

Cases of Racial Algorithmic Bias in Industries

For example, a physician and researcher at the UC Berkeley School of Public Health published a paper revealing that a major medical center’s algorithm for identifying which patients needed extra medical care was racially biased. “The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program” [1]. However, the study found that the white patients the algorithm identified for the program had “fewer chronic health conditions” than the black patients it identified. In other words, black patients had to be more ill than white patients to be considered for the program. As a result, some patients who truly needed the treatment were not enrolled in the program because of their racial background.
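The mechanism can be made concrete with a small sketch. This is not the medical center’s actual algorithm (which is proprietary); published analyses of this case reported that the score used past healthcare cost as a proxy for health need, and all numbers below are invented. If one group incurs lower recorded costs at the same illness level, ranking patients by cost under-selects that group even though a ranking by actual need would not.

```python
# Illustrative sketch with invented numbers: a cost proxy vs. actual need.
patients = [
    # (group, chronic conditions, annual healthcare cost in dollars)
    ("white", 4, 8000),
    ("white", 2, 5000),
    ("black", 6, 4500),  # the sickest patient, but lower recorded spending
    ("black", 3, 2000),
]

def enroll_top2(key_index):
    """Enroll the two highest-ranked patients by the given column."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    return [p[0] for p in ranked[:2]]

print(enroll_top2(2))  # rank by cost proxy  -> ['white', 'white']
print(enroll_top2(1))  # rank by actual need -> ['black', 'white']
```

The sickest patient in this toy data is excluded entirely when the screen ranks by the cost proxy, mirroring the finding that selected white patients were healthier than the black patients being considered.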


Vernon Prater is a white male who was charged with multiple armed and attempted armed robberies, and later with petty theft for shoplifting around $85 worth of supplies from Home Depot. Compare Prater to Brisha Borden, a black female who was charged with burglary and petty theft for picking up a bike and a scooter (combined value of $80), which she quickly dropped once the owner came running. When Prater and Borden went to jail, a computer algorithm predicted who was more likely to commit a crime again. The algorithm predicted that Borden was much more likely to reoffend. It was miserably wrong: after both were released, Borden committed no new crimes, while Prater broke into a warehouse and stole thousands of dollars’ worth of supplies [2]. The explanation? The algorithm’s racial bias.
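The ProPublica investigation cited above quantified this pattern by comparing error rates across groups: among defendants who did not reoffend, black defendants were flagged "high risk" far more often than white defendants. A sketch of that check, using invented toy records rather than the real dataset:

```python
# Toy version of a false-positive-rate comparison (all records invented).
records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, False),
    ("white", True,  True),  ("white", False, False),
    ("white", False, True),  ("white", False, False),
]

def false_positive_rate(group):
    """Share of non-reoffenders in a group who were flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

print(false_positive_rate("black"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate("white"))  # 0 of 2 non-reoffenders flagged
```

A disparity in false positive rates means the cost of the algorithm's mistakes falls disproportionately on one group, which is exactly the asymmetry the Prater/Borden comparison illustrates.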

Sources of Racial Algorithmic Bias

In recognizing the presence of racial algorithmic bias in certain machines, the question of machine neutrality arises. “[Let’s call] the idea that technology itself is neutral with respect to consequences… the neutrality thesis” [3]. Racial algorithmic bias, however, shows that algorithms currently reflect some degree of human bias. One source of racial algorithmic bias is the creator of the machine. There are great gender and racial disparities in the field of engineering; according to the World Economic Forum, “only 22 percent of professionals with AI skills are female” [4]. The outcomes of the resulting algorithms can reflect that uniformity, giving rise to machine learning bias such as racial algorithmic bias. A second source is the way data is collected. Machine learning uses data entries to train the machine and thus create an algorithm, but because most experiments are crafted by humans, much of that data is bound to contain sampling and collection errors. Skewed data used in the creation of an algorithm can produce partial results, including racial algorithmic bias.
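How a sampling error turns into racial algorithmic bias can be sketched in a few lines. This is a minimal, invented example, not a real system: a toy model learns an "approve" threshold from training data in which group B is nearly absent, and whose one group-B member happens to score lower on the proxy feature. The learned threshold is pulled toward group A's score distribution and then wrongly rejects an equally qualified group-B applicant.

```python
# Invented example of sampling bias: group B is underrepresented in training.
train_qualified_scores = {
    "A": [0.80, 0.75, 0.85],  # well represented
    "B": [0.55],              # underrepresented; scores lower on the proxy
}

# Naive "training": set the threshold at the mean qualified score.
all_scores = [s for scores in train_qualified_scores.values() for s in scores]
threshold = sum(all_scores) / len(all_scores)  # 0.7375, dominated by group A

def approve(score):
    return score >= threshold

print(approve(0.78))  # qualified group-A applicant -> True
print(approve(0.60))  # equally qualified group-B applicant -> False
```

The model is not told anyone's race; the disparity comes entirely from who was sampled into the training data, which is why collection errors alone can produce partial results.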

Because machines are being used more frequently in large industries and have the power to dictate the answers to significant questions, the implications of racial bias and inequitable algorithms are quite dire. As machines gain more agency, the ethical implications of discrimination built into algorithms become more important to discuss and address.

References

  1. https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients
  2. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. https://www.cambridge.org/core/books/cambridge-handbook-of-information-and-computer-ethics/values-in-technology-and-disclosive-computer-ethics/4732B8AD60561EC8C171984E2F590C49
  4. https://futurism.com/ai-gender-gap-artificial-intelligence