Predictive Analytics

From SI410
Revision as of 16:10, 17 March 2021 by Ghchan (Talk | contribs)



Predictive analytics is the use of algorithms and machine learning techniques to forecast future events in real time. Leveraging vast data sets, these algorithms are able to predict potential risks and costs, or even an individual’s future behaviour. Thanks to rapid advancements in technology and the emergence of big data, predictive analytics has seen growing use in various industries, from providing health care treatment recommendations [1], to assessing and determining candidates for hire [2], and even to helping police anticipate potential crimes and criminals [3]. As technology continues to improve, data analytics and artificial intelligence will only continue to grow in capability and expand in application. However, as predictive analytics becomes increasingly prevalent in decision-making processes that have direct and potentially life-changing impacts on people’s lives [4], it raises serious ethical concerns regarding algorithmic bias, transparency, and data privacy.

Ethical Challenges

Bias and Discrimination

Ever since advances driven by the computer gaming industry sparked a resurgence in neural networks, deep learning has become the most effective way to train an artificial intelligence system [5]. Designed to mimic the way a human brain thinks and makes decisions, a network of thousands or even millions of individual processing nodes is connected together in a neural net, which enables an algorithm to train itself to perform a task given a prepared training data set [6]. However, “an algorithm is only as good as the data it works with” [7]. If an algorithm is trained on a data set that is inherently biased, not only will the algorithm inherit any pre-existing biases from that data set, but it may also generate new patterns of unfair bias and discrimination in its decision-making process. An algorithm may even interpret inequalities in historical data as sensible patterns, which in turn only further reinforces the existing biases in our society [7]. Furthermore, detecting and addressing unfair bias and discrimination in algorithms for predictive analytics is particularly difficult, as more often than not, they arise as unintended consequences of the algorithm’s usage rather than as the conscious efforts of an ill-intentioned programmer [7].
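The mechanism by which a model inherits bias from its training data can be illustrated with a minimal sketch. The data set, group labels, and hire rates below are entirely hypothetical; the predictor is a simple frequency model rather than a real neural net, but the effect is the same: trained on historically skewed records, it reproduces the skew in its predictions.

```python
# Minimal sketch with hypothetical data: a predictor trained on biased
# historical hiring records simply reproduces the historical disparity.
from collections import defaultdict

# Historical records as (group, hired) pairs. Group "B" was hired far
# less often for reasons unrelated to qualification -- a biased data set.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """Learn the historical hire rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    """Recommend 'hire' when the learned rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
# Identical candidates from groups A and B now receive different
# recommendations, even though no programmer intended that outcome.
```

No one wrote a rule discriminating against group "B"; the disparity emerges purely from the training data, which is why such bias is hard to detect by inspecting the code alone.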

Transparency

Transparency is a serious ethical concern, albeit a tricky one, as transparency can directly oppose other concerns, such as privacy. Transparency in algorithms requires both accessibility and comprehensibility of information about the algorithm [8]: an algorithm could output as much information as it wanted, but if none of it can be understood, then access to that information is pointless. Most, if not all, of today’s most sophisticated artificial-intelligence systems are trained by deep learning on neural nets that comprise millions of individual nodes stretching up to 50 layers deep [5]. As each additional layer adds another dimension of complexity to the algorithm, one could imagine how the decision-making process of an algorithm trained on a network with 50 layers may be harder to comprehend than that of one trained on a network with just 5 layers. This makes it difficult to follow how a decision was reached by the algorithm, which not only makes it near impossible to identify and address injustices, but also raises ethical concerns as to whether or not decisions that could significantly impact human lives should be left to algorithms [8].
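The 5-layer versus 50-layer comparison can be made concrete by counting the parameters a reviewer would have to reason about. The layer width of 100 nodes below is an illustrative assumption, and the sketch considers only a plain fully connected network; real architectures vary, but the growth with depth is the point.

```python
# Rough sketch with illustrative numbers: the parameter count of a fully
# connected network grows with every added layer, which is one reason
# deeper models are harder for a human to inspect and explain.
def parameter_count(layer_sizes):
    """Total weights + biases for a fully connected network whose
    layers have the given sizes."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

shallow = parameter_count([100] * 5)   # 5 layers of 100 nodes each
deep = parameter_count([100] * 50)     # 50 layers of 100 nodes each
```

Even at this modest width, the 50-layer network carries over ten times as many parameters as the 5-layer one, and no individual parameter maps to a human-readable rule.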

Predictive Privacy

The term “predictive privacy” refers to the ethical challenges facing both privacy and data protection that are posed by the ability of algorithms to predict sensitive information about an individual using a large data set of other individuals [3]. In 2019, the Electronic Privacy Information Center (EPIC) raised this very concern in its official complaint to the Federal Trade Commission against HireVue, a recruiting-technology company, arguing that “the company’s use of unproven artificial-intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers” [2]. Mühlhoff defines a violation of predictive privacy as occurring “if sensitive information about that person or group is predicted against their will or without their knowledge on the basis of data of many other individuals, provided that these predictions lead to decisions that affect anyone’s social, economic, psychological, physical, … well-being or freedom” [3]. Most importantly, predictive privacy can still be violated regardless of the prediction’s accuracy. Any information predicted against one’s will that leads to life-affecting decisions could be considered a violation of predictive privacy; and when systems for data collection and processing are designed such that subjects cannot provide meaningful or informed consent [9], predictive privacy needs to be taken all the more seriously, especially when people’s lives are potentially at stake.



References