Automated Decision Making in Child Protection


Automated decision-making regarding child protection refers to the use of software that attempts to quantify the future possibility of harm for a child in a given context. This type of program is used in conjunction with case workers as a method of screening calls placed about possible domestic violence situations to determine the risk level of a child involved. [1] Child protection software is a subset of automated government decision-making, a broader movement among governments around the world to improve the efficiency and consistency of their processes. [2]

Functionality

Child protection decision-making software is an application of predictive analytics. [1] The goal of these systems is to apply various forms of trend identification to predict the likelihood of a future event that could endanger a child. Simple algorithms have been used in child protection agencies for some time, but a recent trend has emerged toward making algorithms as consistent and free from bias as possible. New predictive tools take in vast amounts of administrative data and weigh each variable in order to identify situations that warrant intervention, as the sketch below illustrates. [3]
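
As an illustration only, the following sketch (in Python) shows how such a tool might combine weighted administrative variables into a single risk estimate. The variable names and weights are entirely hypothetical and do not reflect any deployed model.

```python
# Hypothetical sketch of a weighted risk model over administrative data.
# Variable names and weights are invented for illustration; real tools
# are trained on large agency datasets and use many more inputs.

# Learned or configured weight for each administrative variable
WEIGHTS = {
    "prior_referrals": 0.40,        # number of previous screened calls
    "prior_substantiations": 0.85,  # previously confirmed incidents
    "caregiver_age_under_21": 0.30,
    "household_size": 0.05,
}

def risk_score(case: dict) -> float:
    """Combine weighted variables into a raw risk estimate."""
    return sum(WEIGHTS[name] * float(case.get(name, 0)) for name in WEIGHTS)

example_case = {"prior_referrals": 2, "prior_substantiations": 1,
                "caregiver_age_under_21": 1, "household_size": 4}
print(risk_score(example_case))  # higher values suggest higher modeled risk
```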


Although child protection algorithms vary based on development and implementation contexts, their logic (the rules determining program output) comes from two general sources: rules pre-programmed by a professional with expertise in the decision being made, or rules inferred from previous data through machine learning. Pre-programmed rules lead to consistent and repeatable results, while machine learning implementations of predictive analytics can provide more insight into a given data set. Supervised machine learning is common in applications like child protection, where historical data must be labeled before it can be analyzed. [2]
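
A minimal sketch of these two sources of logic follows, assuming scikit-learn is available; the cases, labels, and threshold are invented for illustration.

```python
# Sketch contrasting the two sources of decision logic described above.
# All data and thresholds are invented; scikit-learn is assumed available.
from sklearn.tree import DecisionTreeClassifier

# 1) Pre-programmed rule: a professional encodes the threshold directly.
def screen_in_by_rule(prior_referrals: int) -> bool:
    return prior_referrals >= 3  # fixed, repeatable rule

# 2) Supervised machine learning: the rule is inferred from labeled
#    historical data (each past case marked screened-in or not).
X = [[0], [1], [2], [3], [4], [5]]  # prior referrals per past case
y = [0, 0, 0, 1, 1, 1]              # historical screening decisions
model = DecisionTreeClassifier(max_depth=1).fit(X, y)

print(screen_in_by_rule(4))           # True, by the fixed rule
print(bool(model.predict([[4]])[0]))  # True, by the learned rule
```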

Ethical Concerns

There exist at least three major ethical concerns in applying predictive analytics to child protection. [4] The first is a general issue with any algorithm that performs machine learning: algorithmic transparency. A program may produce results that cannot be reverse-engineered because details about its inner workings are unavailable, whether due to proprietary software or to machine learning itself, which can produce results without specific steps for reproducing them.

Another major concern with predictive analytics, especially in the context of child protection, is the rarity of the events the program attempts to predict. Critical incidents, in which abuse could be fatal to a child, make up roughly 2% of all events, which forces a choice between preferring false positives and preferring false negatives. An algorithm will never be perfect, which leads to the moral question of whether software should prefer false positives, at the cost of resources, or false negatives, at the cost of child safety. [4]
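
A small worked example makes this trade-off concrete. Assuming a hypothetical screening tool with invented sensitivity and specificity, the arithmetic below shows how a roughly 2% base rate produces far more false positives than false negatives.

```python
# Worked example of the false-positive / false-negative trade-off at a
# ~2% base rate. Sensitivity and specificity values are invented to
# illustrate the arithmetic, not drawn from any deployed model.
cases = 10_000
base_rate = 0.02                 # ~2% of cases are critical incidents
critical = cases * base_rate     # 200 truly critical cases
non_critical = cases - critical  # 9,800 non-critical cases

sensitivity = 0.90               # fraction of critical cases flagged
specificity = 0.85               # fraction of non-critical cases cleared

false_negatives = critical * (1 - sensitivity)      # missed children
false_positives = non_critical * (1 - specificity)  # extra investigations

print(f"Missed critical cases: {false_negatives:.0f}")       # 20
print(f"Unnecessary interventions: {false_positives:.0f}")   # 1470
# Lowering the flagging threshold reduces misses but multiplies the
# resource cost; raising it does the reverse.
```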

The last ethical issue is also specific to child protection, although it can appear in other applications of predictive analytics. Reducing a child to a numeric risk score, as most deployed programs do, oversimplifies a complex social scenario. Predictive algorithms can take in an enormous number of variables to make connections, but ultimately cannot understand the social intricacies present in many child abuse cases. [4]

Ethical Efforts

Given the high stakes involved in child protection decisions, agencies utilizing these technologies have taken measures to ensure their processes are ethically sound.

Standards

Predictive models can differ drastically depending on their social context, and they make decisions that can greatly impact a person's life. For this reason, widely accepted standards are in place to determine the efficacy of a given algorithm at predicting child endangerment. [1]

Validity

Validity is the measure of whether a predictive model is measuring what it is meant to measure. [1] It is calculated as the number of true results (true positives and true negatives) divided by the total number of samples analyzed, so more accurate algorithms yield higher values.
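
As a minimal sketch, the calculation described above can be expressed as follows; the confusion-matrix counts are invented for illustration.

```python
# Validity as defined above: true results over total samples analyzed.
# The counts below are invented for illustration.
def validity(true_pos: int, true_neg: int, false_pos: int, false_neg: int) -> float:
    total = true_pos + true_neg + false_pos + false_neg
    return (true_pos + true_neg) / total

print(validity(true_pos=180, true_neg=9_300, false_pos=500, false_neg=20))  # 0.948
```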

Equity

An important facet of any automated government decision-making is the equal treatment of cases across all demographics. In this regard, equity is the measure of variance in risk calculation across major geographic, racial, and ethnic groups, used to determine the applicability of an algorithm. [1]
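
As an illustration, a basic equity check might compare average model output across demographic groups; the group labels and risk scores below are invented.

```python
# Sketch of the equity check described above: compare average model risk
# across demographic groups. Group labels and scores are invented.
from statistics import mean, pvariance

scores_by_group = {
    "group_a": [0.21, 0.35, 0.18, 0.40],
    "group_b": [0.22, 0.33, 0.20, 0.41],
}

group_means = {g: mean(s) for g, s in scores_by_group.items()}
print(group_means)
# Variance across group means; large values flag unequal treatment.
print(pvariance(group_means.values()))
```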

Reliability

Reliability is a measure of the consistency of results when different users of a predictive model are provided with the same information. This is important to social service agencies attempting to maintain a uniform policy across case workers and offices. [1]
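
As a simple illustration, raw percent agreement between two hypothetical users of the same tool could be computed as below; agencies may prefer more robust inter-rater statistics such as Cohen's kappa.

```python
# Sketch of a reliability check: do different users of the same tool,
# given the same case information, reach the same screening decision?
# The decisions below are invented for illustration.
worker_a = ["screen_in", "screen_out", "screen_in", "screen_in"]
worker_b = ["screen_in", "screen_out", "screen_out", "screen_in"]

agreements = sum(a == b for a, b in zip(worker_a, worker_b))
print(agreements / len(worker_a))  # 0.75 raw agreement on the same cases
```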

Usefulness

Usefulness is the measure of how applicable the results of an algorithm are to the case being investigated. Although it is not easily quantified, the following guideline helps categorize an algorithm as useful: “When no potential exists for a particular predictive model to impact practice, improve systems or advance the well-being of children and families, then that model is inadequate.” [1]


Human Element

Even the most powerful computer with a large quantity of data and a precise algorithm cannot be perfect; an experienced human case worker must fill in the gaps. Predictive analysis is an ally to the professionals who use its insights to inform their decisions, as the software is not capable of understanding the context or the intricacies of human interaction well enough to have the final word. [1] [5]

References

  1. Russell, Jesse. "Predictive analytics and child protection: Constraints and opportunities." Child Abuse & Neglect, Volume 46, 2015, Pages 182-189. ISSN 0145-2134. https://doi.org/10.1016/j.chiabu.2015.05.022
  2. Zalnieriute, Monika, Lyria Bennett Moses, and George Williams. "The Rule of Law and Automation of Government Decision-Making." The Modern Law Review, Volume 82, Issue 3, 2019, Pages 425-455. ISSN 0026-7961. https://doi.org/10.1111/1468-2230.12412
  3. Keddell, E. "Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice." Social Sciences, 2019; 8(10): 281. https://doi.org/10.3390/socsci8100281
  4. Church, C.E., and Fairchild, A.J. (2017). "In Search of a Silver Bullet: Child Welfare's Embrace of Predictive Analytics." Juvenile and Family Court Journal, 68: 67-81. https://doi.org/10.1111/jfcj.12086
  5. Vaithianathan, R., & Putnam-Hornstein, E. (2017). "Developing Predictive Models to Support Child Maltreatment Hotline Screening Decisions: Allegheny County Methodology and Implementation." Centre for Social Data Analytics, 48-56.