Automated Decision Making in Child Protection


Automated decision-making in child protection refers to the use of software that attempts to quantify the future possibility of harm for a child in a given context. These programs are used alongside caseworkers to screen calls about possible domestic violence situations and determine the risk level of the child involved. [1] One notable deployment of such a system, in Allegheny County, Pennsylvania, has produced mixed results. [2] Child protection predictive analytics carries ethical concerns, some of which are unique to this specific application. [1]

Functionality

Child protection decision-making software is an application of predictive analytics. [1] The systems' goal is to apply various forms of trend identification to predict the likelihood of a future event that could endanger a child. Child protection agencies have used simple forms of these algorithms for some time, but there has been a recent push to make the algorithms as consistent and free from bias as possible. New predictive tools take in vast amounts of administrative data and weigh each variable to identify situations that warrant intervention. [3]

Although child protection algorithms vary in their development and implementation, the logic (the rules determining program output) comes from two general sources. The first is decision-making rules pre-programmed by a developer; the second is machine learning, where the program derives new rules from collected data. Pre-programmed rules lead to consistent and repeatable results, while machine learning implementations of predictive analytics can provide more insight into a given data set. Applications like child protection commonly use supervised machine learning, where outcomes in the training data are labeled before analysis. [4]
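
A minimal sketch of supervised risk scoring is shown below, assuming a scikit-learn-style workflow; the feature names, data, and rescaling mentioned in the comments are hypothetical illustrations, not the variables or outputs of any real screening tool.

  # Minimal sketch of supervised risk scoring (hypothetical data and features,
  # not those of any real child-protection tool).
  from sklearn.linear_model import LogisticRegression

  # Hypothetical administrative variables for past referrals (rows), with labels
  # assigned before training -- the defining feature of supervised learning.
  X_train = [
      [2, 0, 1],   # prior referrals, prior placements, housing-instability flag
      [0, 0, 0],
      [5, 1, 1],
      [1, 0, 0],
  ]
  y_train = [1, 0, 1, 0]  # 1 = case was later substantiated, 0 = it was not

  model = LogisticRegression()
  model.fit(X_train, y_train)

  # A new referral is scored as a probability; a screening tool might rescale
  # this into a banded risk score shown to the call operator.
  new_referral = [[3, 1, 0]]
  risk = model.predict_proba(new_referral)[0][1]
  print(f"Estimated risk: {risk:.2f}")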

In Practice

Pennsylvania

[Image: DHS goals for implementing the Allegheny Family Screening Tool. Source: Allegheny County Department of Human Services, https://www.alleghenycounty.us/WorkArea/linkit.aspx?LinkIdentifier=id&ItemID=6442467253]

In August 2016, Pennsylvania’s Allegheny County became the first county in the United States to use automated decision-making software to supplement its existing child abuse hotline screening process. The algorithm offers a “second opinion” on the call operator’s initial verdict, allowing the operator to reconsider whether to flag the call as worthy of an investigation. [5]


The Allegheny County Human Services Department's prediction model was found upon evaluation to contain implementation mistakes. Researchers discovered an error in how the model was trained and tested: in some cases, a child named in two separate referrals appeared in both the training and test sets. Another error occurred in the choice of the variables used to predict outcomes. These mistakes resulted in an overestimate of the model's performance. The research team began rebuilding the model in April 2017. [2]
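
As an illustration of how this kind of leakage is generally avoided (not a description of the county's actual rebuild), referrals can be split by child identifier so that no child appears in both sets; the sketch below uses scikit-learn's GroupShuffleSplit and hypothetical data.

  # Split hypothetical referrals by child ID so that no child's referrals are
  # divided between the training and test sets.
  from sklearn.model_selection import GroupShuffleSplit

  referrals = [[1, 0], [2, 1], [0, 0], [3, 1], [1, 1], [0, 1]]  # feature rows
  outcomes  = [0, 1, 0, 1, 1, 0]
  child_ids = ["A", "A", "B", "C", "C", "D"]  # one child can appear in several referrals

  splitter = GroupShuffleSplit(n_splits=1, test_size=0.34, random_state=0)
  train_idx, test_idx = next(splitter.split(referrals, outcomes, groups=child_ids))

  # Every referral for a given child lands entirely in one set, so the model is
  # never evaluated on a child it has already seen during training.
  train_children = {child_ids[i] for i in train_idx}
  test_children = {child_ids[i] for i in test_idx}
  assert train_children.isdisjoint(test_children)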

Britain

In the United Kingdom, many local councils have turned to algorithmic profiling to predict child abuse. The algorithms are designed to take pressure off caseworkers and allow councils to focus human resources elsewhere. [6] There are, however, several other reasons the councils have turned to this technology in recent years. The first is pressure from 2014 media reports of failures within child protection systems. [7] The reports were based on a study that found that human caseworkers in the UK were committing three kinds of errors: slow revision of initial judgments even in light of new evidence, confirmation bias, and witness bias (for example, doctors' statements carried more weight than neighbors'). [8] The councils believe that algorithms may provide an unbiased opinion on which cases to investigate first. Another reason is budget cuts to welfare programs, which have left departments with fewer available caseworkers and pushed them toward algorithms to assist those who remain. In some cases, there are no alternative sources of high-quality child maltreatment services. Only time and cost-benefit analysis will show whether these programs are beneficial, and the stakes are high.

These algorithms collect data on school attendance, housing association repairs, and police records on antisocial behavior and domestic violence. Although many believe this data helps the algorithms, critics question the use of children's sensitive personal data without explicit permission. Data handling is a concern for many citizens in the UK, as some believe the Data Protection Act is not equipped to handle data of this magnitude. Most councils simply pass alerting information along to caseworkers, such as a school expulsion or a report of domestic violence in a child's home. These risk alerts are intended to help caseworkers predict child abuse before it escalates. However, experts question whether intervention in family life is beneficial in most situations and worry that false positives could disrupt family dynamics. [9]


Ethical Concerns

At least three major ethical concerns exist in applying predictive analytics to child protection [10]. The first is a general concern regarding algorithms that perform machine learning: a lack of algorithmic transparency. Algorithmic transparency is compromised when a program produces results that cannot be explained or reverse-engineered because information about the algorithm is unavailable. Missing details about an algorithm's inner workings can result from proprietary software or from the nature of machine learning itself.

Another concern with predictive learning in child protection is the prediction of rare events. Critical incidents in which the abuse of a child could be fatal make up roughly 2% of all events, which leads to an ethical dilemma between preferring false positives or false negatives. An algorithm will never be perfect, which raises the moral question of whether software should return false positives at the cost of resources or false negatives at the cost of child safety. [10]
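
The toy example below (with hypothetical scores and labels) illustrates the trade-off: lowering the decision threshold on a rare outcome catches more true cases, but only by generating many more false positives that consume investigative resources.

  # Hypothetical risk scores and ground-truth labels; roughly 2 in 10 are true cases.
  scores = [0.9, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.1, 0.05, 0.05]
  labels = [1,   1,   0,   0,   0,   0,   0,   0,   0,    0]

  def confusion(threshold):
      """Count false positives and false negatives at a given threshold."""
      fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
      fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
      return fp, fn

  for t in (0.5, 0.3, 0.1):
      fp, fn = confusion(t)
      print(f"threshold={t}: false positives={fp}, false negatives={fn}")
  # threshold=0.5: false positives=0, false negatives=1
  # threshold=0.3: false positives=1, false negatives=0
  # threshold=0.1: false positives=6, false negatives=0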

The third ethical issue is that reducing a child to a numeric risk score can oversimplify a complex social scenario. Predictive algorithms can take in an enormous number of variables to make connections, but they cannot understand the human interactions that many child abuse victims face. [10]

Human Bias

The human creation of algorithms raises ethical concerns about cultural and societal bias. Although algorithms that feature machine learning interpret previous results and user input independently of direct manipulation by a programmer, humans still write the pre-programmed rules. Critics argue that algorithms, however unintentionally, are intrinsically encumbered with the values of their creators. [11]

The primary ethical concern regarding the biases incorporated into algorithms is that they are difficult to detect until a problematic result arises, which could be catastrophic for a child in danger. One study found that a child’s race was used in determining the likelihood of that child being abused. Such flawed parameters have caused some automated decision-making software to return false positives up to 96% of the time. [12]

Ethical Standards

Given the high stakes involved with child protection decisions, agencies utilizing these technologies have taken measures to make their processes ethically sound.

Predictive models make decisions that can significantly impact a person’s life either positively or negatively, depending on the specific context and input. For this reason, there are widely accepted standards in place to determine the efficacy of a given algorithm at predicting child endangerment. [1]

Validity

The validity of a predictive model is a measure of whether the algorithm measures what it is meant to measure. [1] It is calculated as the number of correct results (true positives and true negatives) divided by the total number of samples analyzed, so more accurate algorithms receive higher values.
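
Expressed as a formula, with TP, TN, FP, and FN denoting the counts of true positives, true negatives, false positives, and false negatives, this is the standard accuracy measure:

\[
\text{validity} = \frac{TP + TN}{TP + TN + FP + FN}
\]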

Equity

An important facet of any automated government decision-making is the equal treatment of cases across any demographic. In this regard, equity is the measure of variance in risk calculation across major geographic, racial, and ethnic groups to determine the applicability of an algorithm. [1]
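
A minimal sketch of one way such an equity check might be run is shown below, assuming the agency can compare average risk scores across demographic groups; the group labels and scores are hypothetical.

  # Compare average risk scores across hypothetical demographic groups; a large
  # gap between group means would flag the model for further equity review.
  from collections import defaultdict

  scored_cases = [
      ("group_a", 0.72), ("group_a", 0.65), ("group_a", 0.70),
      ("group_b", 0.41), ("group_b", 0.48), ("group_b", 0.45),
  ]

  by_group = defaultdict(list)
  for group, score in scored_cases:
      by_group[group].append(score)

  means = {group: sum(vals) / len(vals) for group, vals in by_group.items()}
  gap = max(means.values()) - min(means.values())
  print(means, f"largest gap between group means: {gap:.2f}")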

Reliability

Reliability is a measure of the consistency of results from different users of a predictive model when provided with the same information. This is important to social service agencies when attempting to maintain a policy among caseworkers and offices. [1]

Usefulness

Usefulness is the measure of how applicable the results of an algorithm are to a case that is being investigated. Although this is not easily measured numerically, the following guidelines exist to help categorize an algorithm as useful: “When no potential exists for a particular predictive model to impact practice, improve systems or advance the well-being of children and families, then that model is inadequate.” [1]


Human Use

[Image: A graphic illustration of automation bias]

Even the most powerful computer with a large quantity of data and a precise algorithm cannot be perfect; an experienced human caseworker must fill in the gaps.

An issue with these algorithms is automation bias, the misconception that algorithms are neutral and inherently accurate. This phenomenon can lead caseworkers to become over-reliant on automated decision-making software, overlooking flaws in the algorithmic process and making no effort to verify the results through other means. [13]

Predictive analysis can be an ally to health care professionals who use its numerical insight to inform their decisions rather than depend on it entirely. The software cannot understand the specific context or the intricacies of human interaction well enough to ever be 100% accurate. [1] [14]

References

  1. 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 Jesse Russell, Predictive analytics and child protection: Constraints and opportunities, Child Abuse & Neglect, Volume 46, 2015, Pages 182-189, ISSN 0145-2134, https://doi.org/10.1016/j.chiabu.2015.05.022. (https://www.sciencedirect.com/science/article/pii/S0145213415002197)
  2. 2.0 2.1 Chouldechova, A. (2018). A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. Conference on Fairness, Accountability, and Transparency, 81(1). Retrieved from http://proceedings.mlr.press/v81/chouldechova18a/chouldechova18a.pdf
  3. Keddell E. Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice. Social Sciences. 2019; 8(10):281. https://doi.org/10.3390/socsci8100281
  4. Zalnieriute, Monika, Lyria Bennett Moses, and George Williams. The Rule of Law and Automation of Government Decision-Making. The Modern Law Review, Volume 82, Issue 3, 2019, Pages 425-455, ISSN 0026-7961, https://doi.org/10.1111/1468-2230.12412
  5. Dan H. “Can An Algorithm Tell When Kids Are in Danger?” (2018) https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html
  6. McIntyre, N., & Pegg, D. (2018, September 16). Councils use 377,000 people's data in efforts to predict child abuse. Retrieved April 02, 2021, from https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse
  7. Keddell, E. (2019, October 8). Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice. Retrieved April 02, 2021, from https://www.mdpi.com/2076-0760/8/10/281
  8. Glaberson, S. K. (2019). Coding Over the Cracks: Predictive Analytics and Child Protection. Retrieved April 02, 2021, from https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2757&context=ulj
  9. Brown, P., Gilbert, R., Pearson, R., Simmonds, J., Shaw, T., Stein, M., . . . Feder, G. (2018, September 19). Don't trust algorithms to predict child-abuse risk | letters. Retrieved April 02, 2021, from https://www.theguardian.com/technology/2018/sep/19/dont-trust-algorithms-to-predict-child-abuse-risk
  10. 10.0 10.1 10.2 Church, C.E. and Fairchild, A.J. (2017), In Search of a Silver Bullet: Child Welfare's Embrace of Predictive Analytics. Juv Fam Court J, 68: 67-81. https://doi-org.proxy.lib.umich.edu/10.1111/jfcj.12086
  11. Luciano F. The ethics of algorithms: Mapping the debate, Big Data & Society (2016) Pages 1-2, ISSN: 20539517, https://us.sagepub.com/en-us/nam/journal/big-data-society
  12. Quantzig. Predictive Analytics and Child Protective Services (2017) https://www.quantzig.com/blog/predictive-analytics-child-protective-services
  13. Stephanie G. Coding Over the Cracks: Predictive Analytics and Child Protection, Fordham Urban Law Journal, Volume 46 Article 3, 2019, Page 355 https://ir.lawnet.fordham.edu/ulj/vol46/iss2/3
  14. Vaithianathan, R., & Putnam-Hornstein, E. (2017). Developing Predictive Models to Support Child Maltreatment Hotline Screening Decisions: Allegheny County Methodology and Implementation. Centre for Social Data Analytics, 48-56.