Algorithmic Justice League

{| border="0" style="float:center"
|+
|align="center" width="300px"|[[Image:{{{IMAGE|ajl.png}}}|frameless|center|200px|]]
|-
|align="center" style="font-size:80%"|{{{CAPTION|Algorithmic Justice League Logo}}}
|- style="vertical-align:top;"
|'''Abbreviation'''
|{{{Abbreviation|AJL}}}
|- style="vertical-align:top;"
|'''Founder'''
|{{{Founder|Joy Buolamwini}}}
|- style="vertical-align:top;"
|'''Established'''
|{{{Established|October 2016}}}
|- style="vertical-align:top;"
|'''Mission'''
|{{{MISSION|The Algorithmic Justice League is leading a cultural movement towards ''equitable and accountable AI.''}}}
|- style="vertical-align:top;"
|'''Website'''
|{{{WEBSITE|https://www.ajlunited.org}}}
|- style="vertical-align:top;"
|'''Take Action'''
|{{{Take Action|https://www.ajlunited.org/take-action}}}
|}
[[File:joyb.jpg|250px|thumb|MIT researcher, poet and computer scientist Joy Buolamwini]]
 
The [https://www.ajlunited.org Algorithmic Justice League] works to address the implications of, and harm caused by, coded biases in automated systems. These biases affect many people and can make products difficult to use. The Algorithmic Justice League identifies, mitigates, and highlights algorithmic bias. Joy Buolamwini experienced this first-hand when facial recognition software failed to detect her darker-skinned face even though it detected a hand-drawn face and a white person’s face. This prompted her to address the needs of the many technology users who encounter coded biases, as machine learning algorithms are increasingly incorporated into everyday life, where they make important decisions about access to loans, jail time, college admission, and more.
 
== History ==

The Algorithmic Justice League, founded by Joy Buolamwini, works to highlight the social implications and harms caused by artificial intelligence. Buolamwini faced algorithmic discrimination first-hand when her face was not detected by facial analysis software; in order to be recognized by the machine, she had to wear a white mask over her face. The Algorithmic Justice League started by unmasking bias in facial recognition technology, uncovering gender, race, and skin color bias in products from companies such as Amazon, IBM, and Microsoft. These biases in automated systems affect a large portion of people and are introduced during the development process.
 
== Cases of Algorithmic Bias ==
[[File:Borden.png|200px|thumb|Borden was rated high risk for future crime after she and a friend took a kid’s bike and scooter that were sitting outside. She did not reoffend.]]

=== Law Enforcement Risk Assessment ===
<blockquote class="toccolours" style="float:none; padding: 10px 15px 10px 15px; display:table;">
 
FACIAL RECOGNITION TECHNOLOGIES
 
Facial recognition tools sold by large tech companies, including IBM, Microsoft, and Amazon, are racially and gender biased. They have even failed to correctly classify the faces of icons like Oprah Winfrey, Michelle Obama, and Serena Williams. Around the world, these tools are being deployed, raising concerns of mass surveillance.
 
 
EMPLOYMENT
 
An award-winning and beloved teacher encounters injustice with an automated assessment tool, exposing the risk of relying on artificial intelligence to judge human excellence.
 
 
HOUSING
 
A building management company in Brooklyn plans to implement facial recognition software for tenants to use to enter their homes, but the community fights back.
 
 
CRIMINAL JUSTICE
 
Despite working hard to contribute to society, a returning citizen finds her efforts in jeopardy due to law enforcement risk assessment tools. The criminal justice system is already riddled with racial injustice, and biased algorithms are accentuating this.
 
</blockquote>
 
 
 
==== Sentencing ====

Law enforcement agencies across the United States are utilizing machine learning to generate risk scores that factor into prison sentences. These scores are common in courtrooms. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about the defendant’s freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.”<ref name="propublica">Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.</ref> One such tool was used to assess two cases: in one, an 18-year-old black woman and a friend took a scooter and a bike; in the other, a 41-year-old white man who had repeatedly shoplifted was caught again. The woman was given a high-risk score of 8 and the man a low-risk score of 3. This was a clear case of algorithmic bias: she had never committed a crime before yet received the higher risk assessment score, while the man, a repeat offender since he was a juvenile, received the lower score.

==== Unreliable Scoring ====

In 2014, then-U.S. Attorney General Eric Holder warned that risk scores might be injecting bias into the courts and called for the U.S. Sentencing Commission to study their use. “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” he said, adding that they “may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”<ref name="propublica" /> The algorithm was found to be unreliable when used to generate risk scores for about 7,000 people arrested in Broward County, Florida, predicting whether they would commit another crime: only 20 percent of the people predicted to commit violent crimes went on to do so, and white defendants were mislabeled as low risk more often than black defendants.<ref name="propublica" />

[Chart: scores for white defendants were skewed toward lower-risk categories, while scores for black defendants were not. (Source: ProPublica analysis of data from Broward County, Fla.)]

A tool used to determine people's fates should be far more accurate at predicting whether an offender will commit another crime. The United States disproportionately incarcerates people of color, especially black people, compared to their white counterparts. This technology needs to be reevaluated so that offenders are not given incorrect risk assessment scores, as happened to the 18-year-old woman mentioned earlier.
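The kind of disparity ProPublica described can be surfaced with a simple per-group error-rate audit. The sketch below is purely illustrative: the miniature table, the column names, and the cutoff of 5 for "high risk" are assumptions for demonstration, not ProPublica's or the AJL's actual code.

<syntaxhighlight lang="python">
import pandas as pd

# Toy stand-in for a risk-assessment table: group, decile risk score (1-10),
# and whether the person actually reoffended within two years.
df = pd.DataFrame({
    "race":           ["black", "white", "black", "white", "black", "white",
                       "black", "white", "black", "white", "black", "white"],
    "decile_score":   [8, 3, 9, 2, 7, 4, 6, 1, 8, 5, 9, 2],
    "two_year_recid": [0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0],
})

# Treat a decile score of 5 or more as a "high risk" label (illustrative cutoff).
df["high_risk"] = df["decile_score"] >= 5

def error_rates(group):
    """False positive rate (labeled high risk but did not reoffend) and
    false negative rate (labeled low risk but did reoffend) for one group."""
    no_recid = group[group["two_year_recid"] == 0]
    did_recid = group[group["two_year_recid"] == 1]
    return pd.Series({
        "false_positive_rate": no_recid["high_risk"].mean(),
        "false_negative_rate": (~did_recid["high_risk"]).mean(),
    })

# A fair score would give similar error rates across groups; large gaps are a red flag.
print(df.groupby("race").apply(error_rates))
</syntaxhighlight>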


=== Healthcare Biases ===

==== Training Machine Learning Algorithms ====

When deep-learning systems are used to detect health issues such as breast cancer, engineers are tasked with providing the machine with data, including images and the resulting diagnoses indicating whether each patient has cancer. "Datasets collected in North America are purely reflective and lead to lower performance in different parts of Africa and Asia, and vice versa, as certain genetic conditions are more common in certain groups than others," says Alexander Wong, co-founder and chief scientist at DarwinAI.<ref>Dickson, Ben. “Healthcare Algorithms Are Biased, and the Results Can Be Deadly.” PCMAG, PCMag, 23 Jan. 2020, www.pcmag.com/opinions/healthcare-algorithms-are-biased-and-the-results-can-be-deadly.</ref> This is due to these machine learning algorithms' lack of training on skin tones other than those of white or fairer-skinned people.
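The problem Wong describes — a model trained mostly on one population degrading on another — is easiest to see when accuracy is reported per group rather than in aggregate. The sketch below uses entirely synthetic data (no real clinical or dermatological dataset) and made-up feature weights simply to demonstrate that evaluation pattern.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Synthetic 'patients': five image-derived features and a binary diagnosis
    whose relationship to the features is set by the weight vector w."""
    X = rng.normal(size=(n, 5))
    y = (X @ w + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B (standing in for an
# under-represented population) has a different feature/label relationship.
w_a = np.array([1.0, 0.8, 0.5, 0.0, 0.0])
w_b = np.array([-1.0, 0.8, 0.5, 0.0, 0.0])

X_a, y_a = make_group(2000, w_a)
X_b, y_b = make_group(100, w_b)   # only a sliver of group B is seen in training
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Reporting accuracy per group exposes the gap that an aggregate number would hide.
for name, w in [("group A", w_a), ("group B", w_b)]:
    X_test, y_test = make_group(1000, w)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
</syntaxhighlight>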

==== Utilizing Open Source Repositories ====

[Image: Melanoma from ISIC]

A study from Germany that tested machine learning software in dermatology found that a deep-learning convolutional neural network (CNN) outperformed the 58 dermatologists in the study at identifying skin cancers.<ref>Haenssle, Holger A., et al. “Man against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists.” Annals of Oncology 29.8 (2018): 1836–1842.</ref> The CNN used data taken from the International Skin Imaging Collaboration (ISIC), an open-source repository of thousands of skin images for machine learning algorithms. Digital images of skin lesions can be used to educate professionals and the public in melanoma recognition, as well as directly aid in the diagnosis of melanoma through teledermatology, clinical decision support, and automated diagnosis. Currently, a lack of standards for dermatologic imaging undermines the quality and usefulness of skin lesion imaging.<ref name="isic">“ISIC Archive.” ISIC Archive, www.isic-archive.com/#!/topWithHeader/tightContentTop/about/isicArchive.</ref> If configured with the correct information, these machines could help save many lives. Over 9,000 Americans die of melanoma each year, and the personal and financial costs of failing to diagnose melanoma early are considerable, so the need to improve the efficiency, effectiveness, and accuracy of melanoma diagnosis is clear.<ref name="isic" />
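For readers curious how such a classifier is typically assembled, the transfer-learning sketch below is a minimal outline, not the German study's actual model: the <code>lesions/benign</code> and <code>lesions/melanoma</code> directory layout, the ResNet50 backbone, and the hyperparameters are all assumptions for illustration, with images such as those exportable from the ISIC archive in mind.

<syntaxhighlight lang="python">
from tensorflow import keras

# Assumed layout: lesions/benign/*.jpg and lesions/melanoma/*.jpg (illustrative only).
train_ds = keras.utils.image_dataset_from_directory(
    "lesions", validation_split=0.2, subset="training", seed=1,
    label_mode="binary", image_size=(224, 224), batch_size=32)
val_ds = keras.utils.image_dataset_from_directory(
    "lesions", validation_split=0.2, subset="validation", seed=1,
    label_mode="binary", image_size=(224, 224), batch_size=32)

# Reuse an ImageNet-pretrained CNN as a fixed feature extractor and
# train only a small classification head on the lesion images.
base = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # estimated P(melanoma)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=5)
</syntaxhighlight>

A model along these lines would still need the kind of per-population evaluation described above before it could responsibly be used in practice.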

== Algorithmic Justice League's Approach ==

[Image: AI Harms. Credit: Courtesy of Megan Smith (former Chief Technology Officer of the USA)]

The Algorithmic Justice League mitigates the social implications of, and biases in, artificial intelligence by promoting four core principles: affirmative consent, meaningful transparency, continuous oversight and accountability, and actionable critique.<ref>“Mission, Team and Story - The Algorithmic Justice League.” Algorithmic Justice League, www.ajlunited.org/about.</ref> To ensure that biases are not coded into the programs we use, the teams building these deep-learning systems should be diverse and should follow inclusive and ethical practices when designing their algorithms.


== References ==
<references/>