Algorithmic Justice League

Algorithmic Justice League
[Image: Algorithmic Justice League logo]
Abbreviation: AJL
Founder: Joy Buolamwini
Established: October 2016
Mission: The Algorithmic Justice League is leading a cultural movement towards equitable and accountable AI.
Website: https://www.ajlunited.org
Take Action: https://www.ajlunited.org/take-action

The Algorithmic Justice League, or AJL, works to address the implications and harms of coded bias in automated systems. These biases affect many people and can make products difficult or impossible to use. The AJL identifies, mitigates, and highlights algorithmic bias. Founder Joy Buolamwini experienced such bias when facial recognition software would not detect her darker-skinned face, even though it detected a hand-drawn face or a white person's face. This prompted her to address the needs of the many technology users who encounter coded bias as machine learning algorithms are incorporated into everyday life, where they make important decisions about access to loans, jail time, college admissions, and more.

The ethical dilemmas raised by algorithmic bias range from facial recognition software classifying dark-skinned people as gorillas to software determining recidivism risk scores for incarcerated people. Rather than shaming companies, the Algorithmic Justice League encourages people to take action by exposing AI harms and biases in order to effect change.


History

MIT researcher, poet, and computer scientist Joy Buolamwini

The Algorithmic Justice League, founded by Joy Buolamwini, an MIT Media Lab computer scientist, works to highlight the social implications and harms of artificial intelligence. Buolamwini experienced algorithmic discrimination first-hand when facial analysis software failed to detect her face; to be recognized by the machine, she had to wear a white mask. The Algorithmic Justice League started by unmasking bias in facial recognition technology, uncovering gender, race, and skin-color bias in products from companies such as Amazon, IBM, and Microsoft. These biases in automated systems affect a large portion of the population and are introduced during the design and coding process.

Algorithmic Justice League's Approach

AI Harms. Credit: Courtesy of Megan Smith (former Chief Technology Officer of the USA)

The Algorithmic Justice League mitigates the social implications and biases of artificial intelligence by promoting four core principles: affirmative consent, meaningful transparency, continuous oversight and accountability, and actionable critique. [1] To ensure that biases are not coded into the programs we use, the teams building these deep-learning systems should be diverse and should follow inclusive, ethical practices when designing and building their algorithms.

Coded Bias: Personal Stories

"Everybody has unconscious biases, and people embed their own biases into technology," says Meredith Broussard, a data journalism professor at New York University and author of the book Artificial Unintelligence. Your view of the world is being governed by artificial intelligence. In this documentary, Buolamwini looks to discover what it means to be in a society where artificial intelligence is increasingly governing the liberties we might have. And what does it mean if people are discriminated against?  

Facial Recognition Technologies

Facial recognition tools sold by large tech companies, including Amazon, Face++, Google, IBM, and Microsoft, are racially and gender-biased. As Buolamwini explains, "These algorithms performed better on the male faces in the benchmark than on the female faces. They performed significantly better on the lighter faces than on the darker faces. Data's what we're using to teach machines how to learn different kinds of patterns. So if you have largely skewed data sets that are being used to train these systems, you can also have skewed results. AI is based on data, and data is a reflection of our history. So the past dwells within our algorithms."
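The disparity Buolamwini describes is uncovered by breaking a benchmark evaluation down by subgroup instead of reporting a single overall accuracy. Below is a minimal sketch of that kind of audit; the records and group labels are hypothetical stand-ins, not the actual benchmark data or any vendor's output.

```python
# Minimal sketch: auditing face-analysis results by subgroup.
# The records are hypothetical stand-ins, not real benchmark data.
from collections import defaultdict

results = [
    # (predicted_gender, true_gender, skin_type)
    ("female", "female", "lighter"),
    ("male",   "female", "darker"),
    ("male",   "male",   "darker"),
    ("male",   "male",   "lighter"),
    # ... a real benchmark would contain thousands of labeled faces
]

totals, correct = defaultdict(int), defaultdict(int)
for predicted, actual, skin in results:
    group = (actual, skin)                    # e.g. ("female", "darker")
    totals[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group}: accuracy {accuracy:.0%} ({correct[group]}/{totals[group]})")
```

A single aggregate accuracy can look impressive while hiding much lower accuracy for darker-skinned women, which is exactly the pattern this kind of subgroup breakdown revealed.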

Around the world, these tools are being deployed, raising concerns about mass surveillance and misidentification, especially of people of color. In China, citizens who want internet service must submit to facial recognition, and the resulting data is used to grant them access to some services and deny them others. In one scene, protesters against mass surveillance are seen spray-painting, destroying, and shining lasers at the cameras.

In the United Kingdom, police vans are shown with roof-mounted cameras scanning people's faces to identify them. In the documentary, a black 14-year-old boy is stopped by four plainclothes officers to confirm his identity after the cameras match him to a person of interest. They question and fingerprint him, then let him go after discovering it was a false match.

Risk Assessments in Education

An award-winning and beloved teacher encounters injustice with an automated assessment tool, exposing the risk of relying on artificial intelligence to judge human excellence. The teacher in question had received several awards throughout his career celebrating his work as an educator and community leader. He was nevertheless terminated after an automated assessment of his performance determined that he was not meeting standards, despite his students excelling academically.

Employment and Housing Biases

Amazon engineers decided to use AI to sort through resumes for hiring. However, the program was found to be biased against women: it searched resumes for the names of women's colleges and women's sports and automatically downgraded or rejected applications that contained them.

A building management company in Brooklyn planned to implement facial recognition software for tenants to use to enter their homes, but the majority black and brown community fought back with the help of Joy Buolamwini.

Results of the AJL Findings

Joy Buolamwini brought her research on facial recognition of black and white men and women to IBM; the company ran its own research to confirm the findings and made improvements to its AI system.

On June 10, 2020, Amazon announced a one-year pause on police use of its facial recognition technology.

On June 25, 2020, U.S. lawmakers introduced legislation to end federal use of facial recognition technology. There is still no U.S. federal regulation of algorithms.

Take Action

The Algorithmic Justice League believes in exposing AI harms and biases for change, not for shame. In response to its research, many companies have taken the Safe Face Pledge and made significant improvements to their guidelines and processes. So, if you are aware, as an employee, creator, or consumer, of AI harms or biases, the AJL wants to hear from you; stories can be shared through its Take Action page at https://www.ajlunited.org/take-action.

Allies of the Algorithmic Justice League are encouraged to...

Cases of Algorithmic Bias and Their Ethical Dilemmas

Algorithmic Bias

Machine learning systems are created by people and trained on data selected by those people. The machine then learns patterns from that data and makes judgments based on the information provided to it. Whatever the machine is not exposed to becomes a blind spot.
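That blind spot can be demonstrated in a few lines of code. The sketch below, which uses synthetic data and scikit-learn purely for illustration, trains a classifier on one population and then evaluates it on a second population whose labels follow a different rule; the model does well on the data it saw and little better than chance on the data it did not.

```python
# Minimal sketch of a training "blind spot": a model only learns patterns
# present in the data it was exposed to. Synthetic data; assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two populations with different feature distributions and labeling rules.
group_a = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
group_b = rng.normal(loc=4.0, scale=1.0, size=(500, 2))
labels_a = (group_a[:, 0] + group_a[:, 1] > 0).astype(int)
labels_b = (group_b[:, 0] - group_b[:, 1] > 0).astype(int)

# The training set comes entirely from group A; group B is never seen.
model = LogisticRegression().fit(group_a, labels_a)

print("accuracy on group A:", model.score(group_a, labels_a))  # high
print("accuracy on group B:", model.score(group_b, labels_b))  # near chance: the blind spot
```

Nothing in the model is broken in isolation; the failure comes entirely from what was left out of the data it learned from.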

Microsoft's Tay AI Chatbot

In 2016, Microsoft launched Tay, an AI chatbot designed to engage with people aged 18 to 24 on Twitter. Tay was programmed to learn from other Twitter users and to mimic the persona of a young woman. Tay's first words were "hellooooooo world!!!" (the "o" in "world" was a planet Earth emoji for added whimsy). After 12 hours, Tay began to make racist and offensive comments, such as denying the Holocaust, saying feminists "should all die and burn in hell", and claiming that actor "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism." [2] Tay was quickly shut down, and Microsoft released an apology for Tay's actions in a blog post titled "Learning from Tay's introduction."

Peter Lee, Corporate Vice President, Microsoft Healthcare, stated: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as social as they are technical. We will do everything possible to limit technical exploits, but we also know that we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people, often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity." [3]

The Algorithmic Justice League values equitable and accountable AI. Peter Lee took appropriate steps to hold Microsoft accountable for Tay's behavior and outlined how the company would keep improving its products so that people are not harmed by artificial intelligence in the way they were by the disrespectful and offensive comments Tay so quickly learned to make.

[Image: Tweet about the Google Photos "Gorillas" labeling incident]

Google Photos - "Gorillas"

In 2015, Google Photos user Jacky Alciné found that Google Photos had tagged photos of him and his girlfriend as gorillas. The issue was that the algorithm had not been trained with enough images of darker-skinned people to identify them as what they are: people. "Many large technology companies have started to say publicly that they understand the importance of diversity, specifically in development teams, to keep algorithmic bias at bay. After Jacky Alcine publicized Google Photo tagging him as a gorilla, Yonatan Zunger, Google’s chief social architect and head of the infrastructure for Google Assistant, tweeted that Google was quickly putting a team together to address the issue and noted the importance of having people from a range of backgrounds to head off these kinds of problems." [4] Google attempted to fix the algorithm but ultimately decided to remove the gorilla label.

In past comments to the Office of Science and Technology Policy at the White House, Google listed diversity in the machine learning community as one of its top three priorities for the field: "Machine learning can produce benefits that should be broadly shared throughout society. Having people from a variety of perspectives, backgrounds, and experiences working on and developing the technology will help us to identify potential issues." [5] Yonatan Zunger noted that the company was working on longer-term fixes, both around which labels could be problematic for users and around better recognition of dark-skinned faces in its machine learning systems.

Law Enforcement Risk Assessment

COMPAS

Brisha Borden was rated high risk for future crime after she and a friend took a kid’s bike and a scooter that were sitting outside. She did not reoffend.

Law enforcement agencies across the United States use machine learning to generate risk scores that help determine prison sentences. Tim Brennan, a professor of statistics at the University of Colorado, and David Wells, who ran a corrections program in Traverse City, Michigan, created the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. [6] "It assesses not just risk but also nearly two dozen so-called "criminogenic needs" that relate to the major theories of criminality, including "criminal personality," "social isolation," "substance abuse," and "residence/stability." Defendants are ranked low, medium, or high risk in each category." [7]

The criminal justice system is already riddled with racial injustice, and biased algorithms are highlighting this.

Sentencing

These scores are common in courtrooms. "They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about the defendant’s freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin, the results of such assessments are given to judges during criminal sentencing." [8] The tool was used to assess two such cases: one in which an 18-year-old black woman and a friend took a bike and a scooter, and another in which a 41-year-old white man who had repeatedly shoplifted was caught again. The woman was given a high-risk score of 8, and the man a low-risk score of 3. This was a clear case of algorithmic bias: she had never committed a crime before yet received the higher risk score, while the man, a repeat offender since he was a juvenile, received the lower one.

Unreliable Scoring

These charts show that scores for white defendants were skewed toward lower-risk categories. Scores for black defendants were not. (Source: ProPublica analysis of data from Broward County, Fla.)

In 2014, then U.S. Attorney General Eric Holder warned that risk scores might be injecting bias into the courts and called for the U.S. Sentencing Commission to study their use. "Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice," he said, adding, "they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society." [9] ProPublica tested the algorithm's reliability on the risk scores of about 7,000 people arrested in Broward County, Florida, checking whether they went on to commit another crime. Only 20 percent of the people predicted to commit violent crimes actually did so, and white defendants were mislabeled as low-risk more often than black defendants. [10]

According to ProPublica's analysis, Northpointe's assessment tool correctly predicts recidivism 61 percent of the time. But black defendants are almost twice as likely as white defendants to be labeled higher risk yet not actually re-offend, while the tool makes the opposite mistake for white defendants, who are much more likely than black defendants to be labeled lower risk yet go on to commit other crimes.
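The disparity described above is a difference in error rates rather than overall accuracy, and it can be measured directly from prediction data. Below is a minimal sketch of that calculation; the table, column names, and values are hypothetical stand-ins for illustration, not ProPublica's actual dataset.

```python
# Minimal sketch: comparing risk-score error rates by race.
# The DataFrame is a hypothetical stand-in; columns and rows are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "race":       ["black", "black", "black", "black", "white", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 1, 0, 0, 1, 0],   # 1 = labeled medium/high risk by the tool
    "reoffended": [0, 1, 0, 0, 1, 0, 1, 1],   # observed re-offense within two years
})

for race, group in df.groupby("race"):
    no_reoffense = group[group.reoffended == 0]
    reoffense = group[group.reoffended == 1]
    # False positive rate: share labeled high risk among those who did NOT re-offend.
    fpr = (no_reoffense.high_risk == 1).mean()
    # False negative rate: share labeled low risk among those who DID re-offend.
    fnr = (reoffense.high_risk == 0).mean()
    print(f"{race}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

Two groups can show similar overall accuracy while one absorbs far more false positives (labeled high risk but not re-offending) and the other far more false negatives (labeled low risk but re-offending); that asymmetry is what ProPublica reported.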

A tool used to help determine people's fates should be far more accurate at predicting whether an offender will commit another crime. The United States already disproportionately imprisons people of color, especially black people, compared to their white counterparts. This technology needs to be reevaluated so that offenders are not given incorrect risk assessment scores, as happened to 18-year-old Brisha Borden, mentioned earlier.

Healthcare Biases

Training Machine Learning Algorithms

When deep-learning systems are used to detect health issues such as breast cancer, engineers must supply the machine with training data, typically medical images paired with the diagnoses made from those images, so that it can learn to determine whether a patient has cancer. "Datasets collected in North America are purely reflective and lead to lower performance in different parts of Africa and Asia, and vice versa, as certain genetic conditions are more common in certain groups than others," says Alexander Wong, co-founder and chief scientist at DarwinAI. [11] This lower performance is largely due to the lack of training these machine learning algorithms receive on skin tones other than those of white or fairer-skinned people.

Utilizing Open Source Repositories

Melanoma from ISIC

A study from Germany that tested machine learning software in dermatology found that a deep-learning convolutional neural network, or CNN, outperformed the 58 dermatologists in the study at identifying skin cancers. [12] The CNN used data taken from the International Skin Imaging Collaboration, or ISIC, an open-source repository of thousands of skin images for machine learning algorithms.
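The study's network was a commercially developed deep model, but the general shape of such a classifier can be sketched briefly. The example below is an illustration only: it assumes TensorFlow/Keras and a local folder of ISIC-style images sorted into benign/ and malignant/ subdirectories, and the folder layout, image size, and tiny architecture are all assumptions, nowhere near the system evaluated in the study.

```python
# Minimal sketch of a binary skin-lesion classifier. Assumes TensorFlow/Keras and
# images laid out as lesions/benign/*.jpg and lesions/malignant/*.jpg.
# Illustrative only; not the architecture used in the study cited above.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/", image_size=(224, 224), batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # predicted probability of malignancy
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

The data caveat from the previous section still applies: if the image repository underrepresents darker skin tones, a model like this will perform worse for those patients.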

Digital images of skin lesions can be used to educate professionals and the public on melanoma recognition, as well as to directly aid the diagnosis of melanoma through teledermatology, clinical decision support, and automated diagnosis. Currently, a lack of standards for dermatologic imaging undermines the quality and usefulness of skin lesion imaging. [13] If trained on well-curated and representative data, these systems could help save many lives: over 9,000 Americans die of melanoma each year, and the personal and financial costs of failing to diagnose melanoma early are considerable, so the need to improve the efficiency, effectiveness, and accuracy of melanoma diagnosis is clear. [14]


References

  1. “Mission, Team and Story - The Algorithmic Justice League.” Mission, Team and Story - The Algorithmic Justice League, www.ajlunited.org/about.
  2. Garcia, Megan. "Racist in the Machine: The Disturbing Implications of Algorithmic Bias." World Policy Journal, vol. 33 no. 4, 2016, p. 111-117. Project MUSE muse.jhu.edu/article/645268.
  3. Lee, Peter. “Learning from Tay's Introduction.” The Official Microsoft Blog, 25 Mar. 2016, blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/.
  4. Garcia, Megan. "Racist in the Machine: The Disturbing Implications of Algorithmic Bias." World Policy Journal, vol. 33 no. 4, 2016, p. 111-117. Project MUSE muse.jhu.edu/article/645268.
  5. Garcia, Megan. "Racist in the Machine: The Disturbing Implications of Algorithmic Bias." World Policy Journal, vol. 33 no. 4, 2016, p. 111-117. Project MUSE muse.jhu.edu/article/645268.
  6. Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  7. Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  8. Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  9. Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  10. Angwin, Julia, et al. “Machine Bias.” ProPublica, 9 Mar. 2019, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  11. Dickson, Ben. “Healthcare Algorithms Are Biased, and the Results Can Be Deadly.” PCMAG, PCMag, 23 Jan. 2020, www.pcmag.com/opinions/healthcare-algorithms-are-biased-and-the-results-can-be-deadly.
  12. Haenssle, Holger A., et al. "Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists." Annals of Oncology 29.8 (2018): 1836-1842.
  13. “ISIC Archive.” ISIC Archive, www.isic-archive.com/#!/topWithHeader/tightContentTop/about/isicArchive.
  14. “ISIC Archive.” ISIC Archive, www.isic-archive.com/#!/topWithHeader/tightContentTop/about/isicArchive.