Automated Decision Making in Child Protection

Automated decision-making in child protection refers to the use of software that attempts to quantify the likelihood of future harm to a child in a given context. This type of program is used alongside case workers as a method of screening calls placed about possible domestic violence situations to determine the risk level of a child involved. [1] Child protection software is a subset of automated government decision-making, a movement occurring in governments around the world to improve the efficiency and consistency of their processes. [2]

Functionality

Child protection decision-making software is an application of predictive analytics. [1] The goal of these systems is to apply various forms of trend identification to predict the likelihood of a future event that could endanger a child. Simple algorithms have been used in child protection agencies for some time, but a recent trend has emerged of trying to make these algorithms as consistent and free from bias as possible. New predictive tools take in vast amounts of administrative data and weigh each variable in order to identify situations that warrant intervention. [3]
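
The weighting of administrative variables can be pictured as a simple scoring function. The sketch below is a hypothetical illustration only; the variable names and weights are invented and do not come from any deployed tool.

# Hypothetical sketch: a linear risk score over weighted administrative variables.
# Variable names and weights are invented for illustration, not taken from any real system.
WEIGHTS = {
    "prior_referrals": 0.40,       # count of earlier hotline referrals
    "prior_substantiation": 0.85,  # 1 if a past report was substantiated
    "caregiver_age_under_21": 0.30,
    "household_size": 0.05,
}

def risk_score(case: dict) -> float:
    """Combine the weighted variables into a single screening score."""
    return sum(weight * case.get(name, 0) for name, weight in WEIGHTS.items())

example_case = {"prior_referrals": 2, "prior_substantiation": 1, "household_size": 4}
print(risk_score(example_case))  # 1.85

Real tools differ mainly in how such weights are chosen, which is where the two sources of logic described below come in.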

Although child protection algorithms vary in how they are developed and implemented, their logic (the rules determining program output) comes from two general sources: rules pre-programmed by a professional with expertise in the decision being made, or machine learning, in which rules are inferred from previous data. Pre-programmed rules lead to consistent and repeatable results, while machine-learning implementations of predictive analytics can provide more insight into a given data set. Supervised machine learning is common in applications like child protection, where historic data must be classified before it can be analyzed. [2]
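
A minimal sketch of the supervised machine-learning approach is shown below, assuming historic referrals have already been labeled (for example, whether a report was later substantiated). The placeholder data, the feature count, and the choice of scikit-learn are assumptions made for illustration.

# Minimal sketch of supervised learning over labeled historic referral data.
# The random placeholder data and the use of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 6)         # placeholder for administrative variables per referral
y = np.random.randint(0, 2, 500)   # placeholder historic labels (e.g. substantiated or not)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression().fit(X_train, y_train)

# Unlike pre-programmed rules, the model's "rules" (its coefficients) are
# inferred from past data rather than written by a domain professional.
new_referral = np.random.rand(1, 6)
print(model.predict_proba(new_referral)[0, 1])  # estimated risk for a new call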

In Practice

In August 2016, Pennsylvania’s Allegheny County became the first county in the United States to use automated decision-making software to supplement its existing child abuse hotline screening process. The algorithm offers a “second opinion” on the call operator’s initial verdict, giving the operator an opportunity to reconsider whether or not to flag the call as warranting an investigation. [4]

Morality Concerns

There are at least three major ethical concerns in applying predictive analytics to child protection. [5] The first is a general issue with any algorithm that performs machine learning: a lack of algorithmic transparency, in which the program produces results that cannot be reverse-engineered because too little is known about how the algorithm works. This lack of detail about inner workings can stem from proprietary software or from the use of machine learning itself, which can produce results without specific steps describing how to reproduce them.

Another major concern with predictive modeling, especially in the context of child protection, is the rarity of the event the program attempts to predict. Critical incidents, in which the abuse of a child could be fatal, make up roughly 2% of all events, which forces a choice between tolerating false positives and tolerating false negatives. No algorithm will ever be perfect, which raises the moral question of whether the software should prefer false positives at the cost of resources or false negatives at the cost of child safety. [5]
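
A small numerical sketch of this trade-off, assuming a 2% base rate and a hypothetical pool of 10,000 screened calls (all figures are invented for illustration):

# Hypothetical illustration of the false-positive / false-negative trade-off
# when the predicted event is rare (~2% base rate). All numbers are invented.
total_calls = 10_000
critical = int(total_calls * 0.02)       # 200 genuinely critical cases
non_critical = total_calls - critical    # 9,800 other calls

# A cautious threshold that catches 95% of critical cases but also flags 20% of the rest:
true_positives = int(critical * 0.95)          # 190 children correctly flagged
false_negatives = critical - true_positives    # 10 critical cases missed
false_positives = int(non_critical * 0.20)     # 1,960 investigations of non-critical cases

print(false_negatives, false_positives)  # 10 1960
# Raising the threshold shrinks the 1,960 unnecessary investigations (resources),
# but every point of sensitivity given up means more missed critical cases (child safety).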

The last ethical issue is also specific to child protection, although it can appear in other applications of predictive analytics. Reducing a child to a numeric risk score, as most deployed programs do, oversimplifies a complex social scenario. Predictive algorithms can take in an enormous number of variables to make connections, but ultimately cannot understand the incalculable interactions that many child abuse victims face. [5]

Human Bias

All algorithms, regardless of their logic, are ultimately created by humans. The presence of human input raises ethical concerns about cultural and societal bias. Although an algorithm that features machine learning interprets previous results and user input without direct manipulation by a programmer, its pre-programmed rules are still implemented by a programmer or a team of programmers. It is widely argued that algorithms are, however unintentionally, intrinsically encumbered with the values of their creators. [6]

The primary ethical concern with the biases incorporated into algorithms is that they are difficult to find until a problematic result arises, and in the case of preventing children from experiencing abuse, these biases pose a very serious threat. One study found that a child’s race was taken into account as a factor in determining the likelihood of the child being abused. Flawed parameters of this kind have led some automated decision-making software to produce false positives up to 96% of the time. [7]

Ethical Standards

Given the high stakes involved in child protection decisions, agencies utilizing these technologies have taken measures to make their processes ethically sound.

Predictive models make decisions that can greatly impact a person’s life, but their results can differ drastically depending on the specific context and input. For this reason, widely accepted standards are in place to determine the efficacy of a given algorithm at predicting child endangerment. [1]

Validity

The validity of a predictive model is the measure of whether the algorithm is measuring what it is meant to measure. [1] It is calculated as the number of true results (true positives and true negatives) divided by the total number of samples analyzed, so more accurate algorithms yield higher values.
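
Expressed as code, this measure reduces to overall accuracy; the counts passed in below are invented for illustration.

# Validity as described above: the share of correct screenings among all samples.
def validity(true_pos: int, true_neg: int, false_pos: int, false_neg: int) -> float:
    total = true_pos + true_neg + false_pos + false_neg
    return (true_pos + true_neg) / total

# Invented counts for illustration: 9,150 of 10,000 screenings were correct.
print(validity(true_pos=150, true_neg=9000, false_pos=800, false_neg=50))  # 0.915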

Equity

An important facet of any automated government decision-making is the equal treatment of cases across all demographics. In this regard, equity is the measure of variance in risk calculation across major geographic, racial, and ethnic groups, used to determine the applicability of an algorithm. [1]
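
One way such a check could be run is sketched below, assuming risk scores have already been produced and each case carries a group label; the group names and scores are invented for illustration.

# Sketch of an equity check: compare risk-score distributions across groups.
# Group labels and scores are invented for illustration.
from statistics import mean, pvariance

scores_by_group = {
    "group_a": [0.31, 0.45, 0.52, 0.40],
    "group_b": [0.72, 0.66, 0.81, 0.77],
}

group_means = {group: mean(scores) for group, scores in scores_by_group.items()}
print(group_means)                      # average risk score per group
print(pvariance(group_means.values()))  # variance between group averages

# A large variance between group averages would flag the model for review
# before its results are treated as applicable across demographics.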

Reliability

Reliability is a measure of the consistency of results obtained by different users of a predictive model when they are provided with the same information. This is important to social service agencies attempting to maintain a uniform policy across case workers and offices. [1]
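
A simple reliability check could compare the tool's output when different workers enter the same referrals, as in the sketch below; the decision lists are invented for illustration.

# Sketch of a reliability check: agreement between the results two case workers
# obtained after entering the same referrals. The decision lists are invented.
worker_a = ["screen_in", "screen_out", "screen_in", "screen_in", "screen_out"]
worker_b = ["screen_in", "screen_out", "screen_out", "screen_in", "screen_out"]

agreement = sum(a == b for a, b in zip(worker_a, worker_b)) / len(worker_a)
print(agreement)  # 0.8 — share of referrals where the tool produced the same result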

Usefulness

Usefulness is the measure of how applicable an algorithm’s results are to the case being investigated. Although it is not easily measured numerically, the following guideline helps categorize an algorithm as useful: “When no potential exists for a particular predictive model to impact practice, improve systems or advance the well-being of children and families, then that model is inadequate.” [1]


Human Use

[Figure: a graphic illustration of automation bias]

Even the most powerful computer, supplied with a large quantity of data and a precise algorithm, cannot be perfect; an experienced human case worker must fill in the gaps.

One issue posed by human use of these algorithms is “automation bias,” the false conception that algorithms are neutral and inherently accurate. This phenomenon can result in case workers becoming over-reliant on automated decision-making software, overlooking flaws in the algorithmic process and making no effort to verify the results through other means. [8]

Predictive analytics can be an ally to health care professionals who use its insight to inform their decisions rather than depending on it completely, as the software is not capable of understanding the specific context or the intricacies of human interaction to the point that it will always be 100% accurate. [1] [9]

Future of Automated Decision Making

References

  1. Russell, Jesse. "Predictive analytics and child protection: Constraints and opportunities." Child Abuse & Neglect, Volume 46, 2015, Pages 182-189. ISSN 0145-2134. https://doi.org/10.1016/j.chiabu.2015.05.022 (https://www.sciencedirect.com/science/article/pii/S0145213415002197)
  2. Zalnieriute, Monika; Moses, Lyria Bennett; Williams, George. "The Rule of Law and Automation of Government Decision-Making." The Modern Law Review, Volume 82, Issue 3, 2019, Pages 425-455. ISSN 0026-7961. https://doi.org/10.1111/1468-2230.12412
  3. Keddell, E. "Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice." Social Sciences, 2019; 8(10): 281. https://doi.org/10.3390/socsci8100281
  4. Dan H. "Can An Algorithm Tell When Kids Are in Danger?" The New York Times, 2018. https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html
  5. Church, C.E. and Fairchild, A.J. "In Search of a Silver Bullet: Child Welfare's Embrace of Predictive Analytics." Juvenile and Family Court Journal, 68: 67-81, 2017. https://doi.org/10.1111/jfcj.12086
  6. Luciano F. "The ethics of algorithms: Mapping the debate." Big Data & Society, 2016, Pages 1-2. ISSN 2053-9517. https://us.sagepub.com/en-us/nam/journal/big-data-society
  7. Quantzig. "Predictive Analytics and Child Protective Services." 2017. https://www.quantzig.com/blog/predictive-analytics-child-protective-services
  8. Stephanie G. "Coding Over the Cracks: Predictive Analytics and Child Protection." Fordham Urban Law Journal, Volume 46, Article 3, 2019, Page 355. https://ir.lawnet.fordham.edu/ulj/vol46/iss2/3
  9. Vaithianathan, R., & Putnam-Hornstein, E. (2017). "Developing Predictive Models to Support Child Maltreatment Hotline Screening Decisions: Allegheny County Methodology and Implementation." Centre for Social Data Analytics, 48-56.