
This article covers three aspects of racial algorithmic bias. First, it defines racial algorithmic bias and briefly discusses why it arises. Next, it describes instances of racial algorithmic bias in systems used by major industries today. Finally, it examines the sources of racial algorithmic bias and its ethical implications.

Racial Algorithmic Bias


Racial algorithmic bias refers to errors in algorithms that skew results and produce unfair advantages or disadvantages for particular racial groups. Machines learn through algorithms, which are sets of computer-implementable instructions for solving a problem. Racial algorithmic bias can stem from errors in the design of the machine or algorithm, or from sampling errors in the data used to train it. One plausible reason such bias arises is that machines reflect the judgments of their human creators and learn from real-world data that may be erroneous or partial. Because historical racial bias is embedded in the real world, the data used to build algorithms can be skewed, and the resulting algorithm inherits that skew.

Many algorithms that dictate important decisions for millions of people can conceal racial biases and therefore decide consequential questions unfairly. As machines become increasingly omnipresent, discussions of algorithmic bias and computer ethics have become more important, particularly in high-stakes sectors such as healthcare and incarceration that significantly affect both the economy and individual lives. As these industries turn to computer-generated decisions in search of a seemingly more objective selection, racial algorithmic bias can impose unfair disadvantages on certain racial groups. The following sections describe several instances with serious consequences.
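To make that mechanism concrete, here is a minimal sketch assuming entirely synthetic data (the "skill" feature, the group labels, and the approval probabilities are invented for illustration and do not come from any real system): a model trained on historically biased approval decisions ends up scoring two applicants with identical qualifications differently, purely because of group membership.

# Illustrative sketch only: a model trained on biased historical labels
# reproduces that bias. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (assumed)
skill = rng.normal(0, 1, n)              # true qualification, independent of group

# Assumed historical bias: same skill, lower approval odds for group B.
p_approve = 1 / (1 + np.exp(-(skill - 1.0 * group)))
label = rng.binomial(1, p_approve)

model = LogisticRegression().fit(np.column_stack([skill, group]), label)

# Two applicants with identical skill but different group membership.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])   # group B receives a lower score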

Cases of Racial Algorithmic Bias in Industries

Healthcare

A physician and researcher at the UC Berkeley School of Public Health published a paper revealing that a major medical center’s algorithm for identifying which patients needed extra medical care was racially biased. “The algorithm screened patients for enrollment in an intensive care management program, which gave them access to a dedicated hotline for a nurse practitioner, help refilling prescriptions, and so forth. The screening was meant to identify those patients who would most benefit from the program”[1]. However, the white patients the algorithm selected for the program had “fewer chronic health conditions” than the black patients it selected; in other words, black patients had to be more ill than white patients to qualify. As a result, some patients who truly needed the care were passed over because of their racial background.
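The pattern can be illustrated with a hypothetical sketch (the risk score, the deflation term, and every number below are assumptions made for illustration, not values from the study): if a screening score systematically understates one group's need, then at any enrollment cutoff the members of that group who are selected must be sicker, on average, than selected members of the other group.

# Hypothetical sketch of the screening pattern described above.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B (synthetic)
chronic = rng.poisson(3, n)                   # true number of chronic conditions

# Assumed flaw: the score tracks need but is deflated for group B.
score = chronic + rng.normal(0, 1, n) - 1.5 * group

cutoff = np.quantile(score, 0.97)             # top ~3% enrolled in the program
enrolled = score >= cutoff
for g in (0, 1):
    # Enrolled group-B patients show more chronic conditions on average.
    print(g, round(chronic[enrolled & (group == g)].mean(), 2))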

Legal System

Racial algorithmic bias has also skewed decisions in the legal sector. For instance, Vernon Prater, a white man, had been charged with multiple armed and attempted armed robberies, as well as petty theft for shoplifting about $85 worth of supplies from Home Depot. Compare him to Brisha Borden, a black woman, who was charged with burglary and petty theft for picking up a bike and scooter (combined value of $80), which she quickly dropped once the owner came running.[2] When Prater and Borden were jailed, a computer algorithm was used to predict which of the two was more likely to commit a crime again in the future. The algorithm incorrectly predicted that Borden was much more likely to reoffend. After both were released, Borden had not committed any new crimes, whereas Prater had broken into a warehouse and stolen thousands of dollars' worth of supplies.
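The kind of audit behind this finding can be sketched as follows; the scores and outcomes below are synthetic stand-ins, not the actual risk-assessment data, and the bias term is an assumption for illustration. The sketch compares how often people who did not reoffend were nonetheless labeled high risk, broken down by group.

# Illustrative audit: false positive rates of a risk label by group.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
race = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B (synthetic)
reoffended = rng.binomial(1, 0.3, n)          # actual outcome after release

# Assumed biased score: a higher baseline flag rate for group B.
p_flag = np.clip(0.2 + 0.25 * race + 0.3 * reoffended, 0, 1)
high_risk = rng.binomial(1, p_flag)

for g in (0, 1):
    mask = (race == g) & (reoffended == 0)    # people who did NOT reoffend
    # Share wrongly labeled high risk is larger for group B.
    print(f"group {g}: false positive rate = {high_risk[mask].mean():.2f}")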

Another instance of racial profiling in the legal system can be found in predictive policing. Initially framed as an innovative way to optimize police resource allocation, predictive policing ultimately proved to be a tool that automates racial biases already present in the policing system.[3] Like all machine learning and artificial intelligence systems, these models must be trained on historical data, and arrest records largely reflect where police were deployed in the past. When the model sends more officers to already over-policed areas, more arrests are recorded there, which feeds back into the training data and reinforces the pattern (a feedback loop sketched below). A separate study, “Disproportionate Risks of Driving While Black,” showed that black drivers are significantly more likely to be stopped and searched on the road. When these biased records are fed into predictive policing algorithms, they produce further racial profiling and disproportionate arrests.
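The feedback loop can be sketched with a toy simulation (the two neighborhoods, their equal "true crime" rates, and the starting arrest counts are all invented): patrols are allocated according to past recorded arrests, and new arrests are only recorded where patrols are sent, so an initial disparity in the records never corrects itself even though the underlying crime rates are identical.

# Toy feedback-loop simulation; all numbers are invented for illustration.
import numpy as np

true_crime = np.array([1.0, 1.0])      # equal underlying crime in both neighborhoods
recorded = np.array([60.0, 40.0])      # historical records already skewed

for year in range(5):
    patrols = 100 * recorded / recorded.sum()     # allocate by past records
    new_arrests = patrols * true_crime * 0.1      # crime is only found where police look
    recorded = recorded + new_arrests
    print(year, np.round(recorded / recorded.sum(), 3))
# The recorded share never moves back toward 50/50: the data keeps
# "confirming" the original over-policing of the first neighborhood.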

Technology

Large tech firms have also repeatedly come under scrutiny when a lack of diversity in the workplace manifests itself in racially insensitive products. For example, in 2015, Google faced a major public relations scandal when its Photos app auto-categorized photos of two black people as "gorillas." The error went viral after Jacky Alciné, a web developer, tweeted a screenshot of the miscategorization.[4]

Another example of racially biased automation is smart cameras flagging Asian subjects as "blinking." Nikon and Sony cameras both came under scrutiny after reports that their "Someone blinked!" warning repeatedly appeared in photos of Asian subjects.[5]

Sources of Racial Algorithmic Bias and Ethical Implications

Recognizing the presence of racial algorithmic bias in certain machines raises the question of machine neutrality. “[Let’s call] the idea that technology itself is neutral with respect to consequences… the neutrality thesis”[6]. Racial algorithmic bias, however, shows that algorithms currently reflect some degree of human bias. One source of racial algorithmic bias is the creators of the machine. There is a large gender and race disparity in the field of engineering; according to the World Economic Forum, “only 22 percent of professionals with AI skills are female”[7]. The outcomes of the resulting algorithms reflect that uniformity, and from it arises machine learning bias, including racial algorithmic bias. A second source is the way data is collected. Machine learning uses data to teach the machine and thus create an algorithm, but because data collection is designed and carried out by humans, datasets are prone to sampling and collection errors. Skewed data used in building an algorithm can produce partial results, including racially biased ones.
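A brief, hedged sketch of this sampling problem, using synthetic data (the two groups, their sizes, and their differing feature patterns are assumptions for illustration): a model trained on a sample dominated by one group tends to be noticeably less accurate for the underrepresented group, even when the group label itself is never used as a feature.

# Illustrative sketch of sampling bias; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def make_group(n, shift):
    # Each group's outcome depends on its features in a slightly different way.
    X = rng.normal(0, 1, (n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

Xa, ya = make_group(9_500, shift=0.0)     # majority group dominates the sample
Xb, yb = make_group(500, shift=1.5)       # underrepresented group, different pattern

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

Xa_t, ya_t = make_group(2_000, shift=0.0)
Xb_t, yb_t = make_group(2_000, shift=1.5)
print("majority-group accuracy:", round(accuracy_score(ya_t, model.predict(Xa_t)), 2))
print("minority-group accuracy:", round(accuracy_score(yb_t, model.predict(Xb_t)), 2))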

Because machines are used ever more frequently in large industries and have the power to dictate answers to significant questions, the implications of racially biased and inequitable algorithms are dire. As machines gain more agency, the ethical implications of discrimination built into algorithms become more important to discuss and address.

References

  1. https://spectrum.ieee.org/the-human-os/biomedical/ethics/racial-bias-found-in-algorithms-that-determine-health-care-for-millions-of-patients
  2. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. Statement of Concern About Predictive Policing by ACLU and 16 Civil Rights Privacy, Racial Justice, and Technology Organizations. American Civil Liberties Union. https://www.aclu.org/other/statement-concern-about-predictive-policing-aclu-and-16-civil-rights-privacy-racial-justice
  4. Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms. Wall Street Journal. https://www.wsj.com/articles/BL-DGB-42522
  5. “Racist” Camera Phenomenon Explained — Almost. PetaPixel. https://petapixel.com/2010/01/22/racist-camera-phenomenon-explained-almost/
  6. https://www.cambridge.org/core/books/cambridge-handbook-of-information-and-computer-ethics/values-in-technology-and-disclosive-computer-ethics/4732B8AD60561EC8C171984E2F590C49
  7. https://futurism.com/ai-gender-gap-artificial-intelligence