AI and Recidivism

From SI410
Revision as of 17:02, 27 January 2023 by Vanderln (Talk | contribs)


COMPAS AI: An Insight into Racial Bias Present in Legal Algorithms

When it comes to criminal justice, the United States incorporates a variety of AI tools to assess criminal risk levels. These machine-learning-based algorithms use statistical data to find patterns in criminal behavior [1], with the aim of helping police prioritize resources and predict future criminal behavior and the risk associated with individuals. One particular class of risk assessment tool aims to predict recidivism: the tendency of a convicted criminal to commit another crime after conviction. Such predictions are often used when a criminal defendant is tried by the courts. Estimating risk levels for criminal defendants matters to law enforcement because incarceration rates remain high and resources remain scarce. A defendant deemed less risky by recidivism AI is given more options, such as community service, parole, and shorter sentencing. A defendant deemed risky, on the other hand, may have fewer options for parole, release, or shorter sentencing, since the tool predicts they will commit future crimes.

This software, COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions [2], was designed to be free of bias. It provides each defendant a risk score: a scale intended to indicate to judges and law enforcement how likely the defendant is to commit a future crime within two years, along with an assessment of how serious that crime is likely to be. For example, a defendant predicted to commit a violent crime receives a higher risk assessment than a defendant predicted to commit a low-level property crime. The machine-learning algorithm takes a breadth of data and assesses a variety of factors based on that data. Because COMPAS looked only at data, it was perceived to be fair and free of human bias.
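COMPAS's actual model and weights are proprietary, so its mechanics can only be sketched. The toy scorer below, in which every feature name and weight is hypothetical, illustrates the general shape of such a tool: a weighted combination of defendant attributes squashed into a bounded 1–10 scale of the kind a judge would see.

```python
import math

# Illustrative sketch only -- COMPAS's real model is proprietary.
# Feature names and weights here are hypothetical.

def risk_score(features, weights):
    """Map defendant features to a 1-10 risk score.

    features/weights: dicts keyed by the same (hypothetical) names.
    """
    raw = sum(weights[name] * value for name, value in features.items())
    p = 1.0 / (1.0 + math.exp(-raw))   # squash to the interval (0, 1)
    return min(10, int(p * 10) + 1)    # bin into a 1..10 scale

# Example: more prior offenses pushes the score up.
low = risk_score({"prior_offenses": 0, "age": 1.0},
                 {"prior_offenses": 0.8, "age": -0.5})
high = risk_score({"prior_offenses": 5, "age": 1.0},
                  {"prior_offenses": 0.8, "age": -0.5})
```

The point of the sketch is that the score is entirely determined by which features are fed in and how they are weighted, which is where the concerns discussed below enter.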

Predicting levels of recidivism is important to law enforcement due to a variety of outside factors. These factors include, but are not limited to, pressure to lower incarceration rates in the United States and limited resources for law enforcement. Currently, the United States incarcerates at the sixth-highest rate in the world [3]. The sheer volume of offenders that need to be processed through the legal system created demand for a quick, unbiased tool to help lawmakers decide which criminals are most dangerous or risky, and predicted recidivism is one of the markers lawmakers use to assess that risk. As the United States faces political pressure to lower incarceration rates, software like COMPAS aims to prioritize the “riskiest” criminals for incarceration. Additionally, software like COMPAS aims to eliminate human bias from recidivism prediction. The logic behind such AI software goes something like this: humans can possess unconscious biases that may unfairly influence how judges predict recidivism; a machine-learning AI, on the other hand, can only assess the data.

While recidivism-predicting AI software is in theory aimed at eliminating bias, the implementation of COMPAS has many limitations that disproportionately hurt certain groups of people. COMPAS assesses a defendant’s risk of committing either a misdemeanor or a felony within two years of assessment [4]. It analyzes everything from housing stability, substance use, family income, and community ties to employment status and more. Low levels of factors like housing stability, family income, and employment often correlate with race. Because of this correlation, the recidivism AI inaccurately predicted higher recidivism rates among Black defendants: “Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%” [5]. This insight into COMPAS exposes the limitations present in recidivism AI. While the software sought to discourage racial bias in criminal sentencing, it widened the gap in inequitable sentencing, particularly between Black and white defendants. Software like COMPAS cannot distinguish between correlation and causation. Family income, employment levels, and other factors COMPAS analyzes unfortunately fall under the umbrella of systemic racism; rates of unemployment, for instance, are higher among Black populations than white populations in the United States. While unemployment may correlate with certain types of crimes, such as property crimes, using such factors unfairly and inaccurately predicts higher rates of recidivism among Black defendants.
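The 44.9% versus 23.5% figure quoted above is a false positive rate: among defendants who did not reoffend, the fraction the tool wrongly labeled as likely to reoffend. A minimal sketch of that per-group computation, using invented records rather than the actual ProPublica data:

```python
# False positive rate: of the defendants who did NOT reoffend,
# what share did the tool wrongly flag as high risk?
# The sample records below are fabricated for illustration only.

def false_positive_rate(records):
    """records: iterable of (predicted_high_risk, reoffended) booleans."""
    flags = [pred for pred, reoffended in records if not reoffended]
    return sum(flags) / len(flags) if flags else 0.0

# Toy per-group data: (prediction, outcome) pairs.
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (True, True)]

fpr_a = false_positive_rate(group_a)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 1 of 3 non-reoffenders flagged
```

A tool can have similar overall accuracy for two groups and still produce very different false positive rates between them, which is precisely the kind of disparity ProPublica reported.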

Furthermore, the data available for machine-learning algorithms such as COMPAS to analyze is skewed. Years of sentencing inequity have unjustly deemed Black offenders more violent, and the data COMPAS learns from reflects that inequity. In short, the historical data COMPAS learns patterns from is biased, making the algorithm biased as well.

COMPAS discourages fair sentencing because it cannot distinguish truth from correlation. When assessing the data available to it, it unjustly and inaccurately perceives Black offenders as riskier than white offenders. This perpetuates existing racial inequities in prison sentencing and allows sentencing inequity to persist.
  1. https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/
  2. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
  3. https://www.statista.com/statistics/262962/countries-with-the-most-prisoners-per-100-000-inhabitants/
  4. https://www.science.org/doi/10.1126/sciadv.aao5580
  5. https://www.science.org/doi/10.1126/sciadv.aao5580