Healthcare analytics refers to the systematic use of health data and related business insights, developed through data analytics, to drive fact-based decision-making for planning, management, measurement, and learning in healthcare. It can be used to identify high-risk patients and guide treatment, reducing unnecessary hospitalizations and readmissions. Researchers have also developed analytical models that predict future patient behavior from past behavior. These models provide accurate predictions of no-show patients and help clinics develop operational mitigation strategies such as overbooking appointment slots. Such models can even generate optimal solutions for clinical planning and scheduling decisions to improve patient service at hospitals.
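The overbooking strategy mentioned above can be made concrete with a small calculation. The sketch below (an illustration, not taken from any cited model) treats each patient's attendance as an independent coin flip with a predicted show-up probability, and computes how many patients a clinic can expect to actually see when it books more appointments than it has slots:

```python
from math import comb

def expected_patients_seen(slots_booked, capacity, p_show):
    """Expected number of patients actually seen when `slots_booked`
    appointments are scheduled against `capacity` physical slots and
    each patient independently shows up with probability `p_show`."""
    expected = 0.0
    for k in range(slots_booked + 1):
        # binomial probability that exactly k of the booked patients show up
        prob_k = comb(slots_booked, k) * p_show**k * (1 - p_show)**(slots_booked - k)
        # the clinic can see at most `capacity` of them
        expected += prob_k * min(k, capacity)
    return expected

# With a predicted 20% no-show rate, booking 12 patients into 10 slots
# recovers much of the capacity that no-shows would otherwise waste.
baseline = expected_patients_seen(10, 10, 0.8)   # no overbooking
overbooked = expected_patients_seen(12, 10, 0.8)  # overbooked by 2
print(baseline, overbooked)
```

This is the basic trade-off a no-show prediction model informs: a more accurate `p_show` estimate lets the clinic choose an overbooking level that fills slots without routinely overcrowding the schedule.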
Predictive analytics has also been used to study Parkinson's disease. Some advocate its use to build models that predict which people are at higher risk of developing chronic diseases, enabling earlier diagnosis.
In the field of human resources, predictive analytics can be used to forecast openings within companies and to predict which employees may become a liability. The use of data analytics in human resources has seen a recent surge in popularity, as companies use it to design, evaluate, and implement new management policies; this also means that the traditional approach of guiding human resources strategy by experience, intuition, and guesswork is falling by the wayside.
A study conducted in 2019 on 4,800 individuals across companies in a variety of industries determined that roughly one-quarter to one-third of all companies used predictive analytics in human resources. The study also found that the financial services sector applied such analytics most heavily, with 32% of companies doing so. Technology (software), oil and energy, and healthcare and pharmaceuticals each had over 25% of companies applying analytics to human resources.
In law enforcement, a predictive modeling system known as "PredPol" has been used by the Los Angeles Police Department (LAPD). Officers at the department state that it is used as a supplementary tool rather than a replacement for their normal rotations. Additionally, to reduce potential biases, the PredPol system makes predictions based solely on reported crimes, not on demographic or identifying information about the individuals involved. However, in 2020, the LAPD decided to stop using the controversial program.
In finance, a stock option is the right, but not the obligation, to buy or sell shares at an agreed price on or before a particular date. An option to buy shares at an agreed-upon price on or before an expiration date is a "call option"; an option to sell shares under the same terms is a "put option." For example, a call option with a "strike price" of $120 that expires in 30 days allows its owner to purchase the stock at $120 per share at any time within those 30 days, regardless of the market price. If the price rises to $135 within that window, the holder may still buy at $120 even though the stock is trading at $135. A put option is the inverse of this contract: if the market price falls below the strike price, the holder may sell the stock at the original, higher strike price before the expiration date.
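The dollar examples above reduce to a simple payoff rule: a call is worth the excess of the market price over the strike (if any), and a put the excess of the strike over the market price. A minimal sketch:

```python
def call_payoff(spot, strike):
    # exercise only pays off if the market price exceeds the strike
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    # exercise only pays off if the market price falls below the strike
    return max(strike - spot, 0.0)

# The example from the text: a $120-strike call with the stock at $135
print(call_payoff(135.0, 120.0))  # $15 per share at exercise

# The mirror-image put: the right to sell at $120 after a drop to $105
print(put_payoff(105.0, 120.0))   # $15 per share at exercise
```

The `max(..., 0.0)` captures the "right, but not the obligation" part: if exercising would lose money, the holder simply lets the option expire worthless.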
Predictive analytics is used in the path integral approach to financial modeling and options pricing, where algorithms evaluate Gaussian path integrals representing the transition probability density used to predict positive or negative slopes in option prices. In theory, this approach can optimize returns on investment if trained on sufficient and accurate data. These methods derive from stochastic procedures such as Monte Carlo simulation and are designed to mimic the randomness inherent in the stock market.
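A minimal sketch of the Monte Carlo idea, under the standard assumption (not specific to the path-integral papers cited here) that the stock follows geometric Brownian motion, so that log-returns are Gaussian — the discrete analogue of the Gaussian transition density mentioned above:

```python
import math
import random

def mc_call_price(s0, strike, r, sigma, t, n_paths=100_000, seed=42):
    """Monte Carlo estimate of a European call price.

    Each simulated terminal price draws one Gaussian log-return,
    mirroring the Gaussian transition probability density used in
    the path-integral formulation of options pricing."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # terminal price under geometric Brownian motion
        s_t = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        total += max(s_t - strike, 0.0)  # call payoff at expiry
    # discount the average payoff back to today
    return math.exp(-r * t) * total / n_paths

# Sanity check: for these inputs the Black–Scholes value is about 10.45,
# and the Monte Carlo estimate should land close to it.
print(mc_call_price(s0=100, strike=100, r=0.05, sigma=0.2, t=1.0))
```

With more paths the estimate converges toward the closed-form price; the appeal of simulation is that it keeps working when the payoff or the price dynamics become too complicated for a closed form.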
Machine Learning
Machine learning is a prominent example of predictive analytics. A familiar instance is a recommendation page: at first its suggestions may not match a person's tastes, but as the person continues to use the service, the algorithm builds a better picture of their interests and recommends items they are more likely to enjoy. Machine learning encompasses many approaches, with and without human supervision, which allows it to be applied in a wide range of situations. Examples include digital assistants that learn a user's voice, chatbots that interpret text and provide suitable responses, and self-driving cars.
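The recommendation-page behavior described above — generic at first, sharper with use — can be caricatured with a deliberately tiny sketch. This is not a real recommender system, just an illustration of the feedback loop: each interaction is new training data, and the recommendation shifts toward the demonstrated interest:

```python
from collections import Counter

class SimpleRecommender:
    """Toy illustration of recommendations sharpening with usage."""

    def __init__(self, categories):
        # start with no history: every category is equally likely
        self.clicks = Counter({c: 0 for c in categories})

    def record_click(self, category):
        # each user interaction becomes a training signal
        self.clicks[category] += 1

    def recommend(self):
        # recommend the category with the most demonstrated interest
        return self.clicks.most_common(1)[0][0]

rec = SimpleRecommender(["sports", "cooking", "tech"])
for click in ["tech", "cooking", "tech", "tech"]:
    rec.record_click(click)
print(rec.recommend())  # "tech" now dominates the click history
```

Real systems replace the raw counts with learned models over many features, but the core loop — observe behavior, update the model, predict the next preference — is the same.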
Bias and Discrimination
Ever since the rise of the computer gaming industry helped drive a resurgence of neural networks, experts have argued that deep learning is a highly effective way to train artificial intelligence systems. Designed to mimic the way a human brain thinks and makes decisions, a neural net connects thousands or millions of individual processing nodes, enabling an algorithm to train itself to perform a task when given a prepared training data set. However, according to Barocas and Selbst, of Cornell University and UCLA respectively, "an algorithm is only as good as the data it works with". Zarsky, a professor and vice dean at Haifa University, argues that algorithms trained on biased data sets not only inherit pre-existing biases but also generate novel patterns of unfair bias and discrimination, reinforcing those patterns in their decision-making. An algorithm may even interpret inequalities in historical data as sensible patterns, further entrenching existing societal biases. The programmer's own biases can also make an algorithm biased: algorithms execute the programmer's code, so when that code embodies biased assumptions about how to read data and what to prioritize, those assumptions shape how the program runs. Algorithms can also fail in a way analogous to how the human brain fills in gaps when context is missing; when an algorithm inherits this tendency, it can miss the very patterns it was designed to find. When applied to driver safety in self-driving cars or to sensitive government functions such as risk assessment in the criminal justice system, these unintended consequences can have far-reaching effects. For example, a woman named Elaine Herzberg was struck and killed by a self-driving car that misidentified her as another vehicle until it was too late.
The system could not take evasive action on its own and handed control to the manual override, but the driver was not paying attention and Herzberg was hit. Another example comes from Broward County, Florida, where more than 18,000 inmates were assigned risk assessment scores that allegedly carried racial bias, over-valuing the risk posed by African Americans. Detecting and addressing unfair bias and discrimination in predictive analytics algorithms is particularly difficult because it usually arises as an unintended consequence of the algorithm's use, not from the purposeful actions of an ill-intentioned programmer.
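The mechanism by which "an algorithm is only as good as the data it works with" can be shown with a deliberately artificial simulation (the numbers and scenario below are invented for illustration, not drawn from the cited cases). Suppose two neighborhoods have identical underlying behavior, but one was historically policed twice as heavily, so its offenses were recorded more often. A model that simply learns historical base rates then reports the enforcement skew as if it were a difference in risk:

```python
import random

random.seed(0)

def make_record():
    """One hypothetical historical record: (neighborhood, offense recorded)."""
    neighborhood = random.choice("AB")
    # underlying behavior is identical in both neighborhoods
    offense = random.random() < 0.3
    # but neighborhood A was policed more heavily, so its offenses were
    # recorded at twice the rate of neighborhood B's
    recorded = offense and (neighborhood == "A" or random.random() < 0.5)
    return neighborhood, recorded

data = [make_record() for _ in range(100_000)]

# A naive "risk predictor" that learns per-neighborhood base rates
# inherits the enforcement skew and labels A roughly twice as risky,
# even though true behavior was equal by construction.
for hood in "AB":
    records = [recorded for n, recorded in data if n == hood]
    print(hood, round(sum(records) / len(records), 3))
```

Nothing in the model is "wrong" in a narrow statistical sense — it faithfully learns the data it was given — which is exactly why this kind of bias is hard to detect from model behavior alone.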
Some assert that transparency, as an ethical issue, stands in tension with other ethical interests such as privacy. Algorithmic transparency means that an algorithm's details are both accessible and comprehensible to the humans analyzing it; accessible information that cannot be deciphered is not useful. Most modern, sophisticated artificial intelligence systems are trained via deep learning, using extensive neural networks up to fifty layers deep. Sloan and Warner argue that as each layer adds complexity, the human comprehensibility of these networks declines, and with it their transparency.
The term "predictive privacy" refers to the ethical challenges posed by the ability of algorithms to predict sensitive information about an individual using data derived from other individuals. In 2019, the Electronic Privacy Information Center (EPIC) raised this concern in its official complaint to the Federal Trade Commission (FTC) against HireVue, a recruiting-technology company, stating that "the company's use of unproven artificial-intelligence systems that scan people's faces and voices [constitutes] a wide-scale threat to American workers". Mühlhoff defines a violation of predictive privacy as occurring "if sensitive information about [a] person or group is predicted against their will or without their knowledge on the basis of data of many other individuals, provided that these predictions lead to decisions that affect anyone's...freedom". Notably, predictive privacy can be violated regardless of the prediction's accuracy, especially when systems for data collection and processing are designed so that subjects cannot give meaningful or informed consent.
- "Predictive Analytics: What It Is & Why It's Important?". EduPristine, 2021, https://www.edupristine.com/blog/importance-of-predictive-analytics.
- Theodoridis, Sergios. Machine Learning : A Bayesian and Optimization Perspective. Elsevier Science & Technology, 2015, doi:10.1016/C2013-0-19102-7.
- Nyce, Charles. "Predictive Analytics White Paper." The Digital Insurer, American Institute for CPCU, 2007, www.the-digital-insurer.com/wp-content/uploads/2013/12/78-Predictive-Modeling-White-Paper.pdf.
- Shah, Nilay D, et al. “Big Data and Predictive Analytics: Recalibrating Expectations.” JAMA : The Journal of the American Medical Association, vol. 320, no. 1, American Medical Association, 2018, pp. 27–28, doi:10.1001/jama.2018.5602.
- Nathan (September 2, 2008), "Insurers Shift to Customer-focused Predictive Analytics Technologies", Insurance & Technology, archived from the original on July 22, 2012, retrieved July 2, 2012
- Fletcher, Heather (March 2, 2011), "The 7 Best Uses for Predictive Analytics in Multichannel Marketing", Target Marketing
- Cohen, I. G., et al. "The Legal And Ethical Concerns That Arise From Using Complex Predictive Analytics In Health Care." Health Affairs, vol. 33, no. 7, 2014, pp. 1139-47, doi:10.1377/hlthaff.2014.0048.
- Harwell, Drew. "Rights group files federal complaint against AI-hiring firm HireVue, citing 'unfair and deceptive practices.'" Washington Post, 6 November 2019, www.washingtonpost.com/technology/2019/11/06/prominent-rights-group-files-federal-complaint-against-ai-hiring-firm-hirevue-citing-unfair-deceptive-practices.
- Perry, Walter, et al. "Predictive Policing: The Role Of Crime Forecasting In Law Enforcement Operations." RAND Corporation, 2013, doi:10.7249/rr233.
- Mühlhoff, Rainer. "Predictive Privacy: Towards An Applied Ethics Of Data Analytics." SSRN, 2020, doi:10.2139/ssrn.3724185.
- Lynn, John. “Using NLP with Machine Learning for Predictive Analytics in Healthcare”. Healthcare IT Today. December 12, 2016
- Kankanhalli, Atreyi, et al. "Big data and analytics in healthcare: introduction to the special section." Information Systems Frontiers 18.2 (2016): 233-235.
- Harris, Shannon L., Jerrold H. May, and Luis G. Vargas. "Predictive analytics model for healthcare planning and scheduling." European Journal of Operational Research 253.1 (2016): 121-131.
- Dinov, Ivo D., et al. "Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations." PLoS One, vol. 11, no. 8, 5 August 2016, doi:10.1371/journal.pone.0157077.
- "Predictive analytics in healthcare." Foresee Medical, www.foreseemed.com/predictive-analytics-in-healthcare. Accessed 28 March 2021.
- Mishra, Sujeet N., et al. "Human Resource Predictive Analytics (HRPA) for HR Management in Organizations." International Journal of Scientific & Technology Research, vol. 5, no. 5, May 2016, www.ijstr.org/final-print/may2016/Human-Resource-Predictive-Analytics-hrpa-For-Hr-Management-In-Organizations.pdf.
- King, Kylie Goodell. "Data analytics in human resources: A case study and critical review." Human Resource Development Review 15.4 (2016): 487-495.
- Noack, Brent. "Big data analytics in human resource management: Automated decision-making processes, predictive hiring algorithms, and cutting-edge workplace surveillance technologies." Psychosociological Issues in Human Resource Management 7.2 (2019): 37-42.
- Eidam, Eyragon. "The Role of Data Analytics in Predictive Policing." Government Technology, September 2016, www.govtech.com/data/Role-of-Data-Analytics-in-Predictive-Policing.html. Accessed 28 March 2021.
- Miller, L. (2021). LAPD will end controversial program that aimed to predict where crimes would occur. Los Angeles Times. Retrieved 17 April 2021, from https://www.latimes.com/california/story/2020-04-21/lapd-ends-predictive-policing-program.
- Melicher, Ronald, and Merle Welshans. Finance: Introduction to Markets, Institutions & Management. 7th ed., Southwestern Publishing Company, Cincinnati, OH, 1988, p. 2. ISBN 0-538-06160-X.
- Stultz, Russell A. The Options Trading Primer : Using Rules-Based Option Trading to Earn a Steady Income. Business Expert Press, 2019.
- Linetsky, Vadim. “The Path Integral Approach to Financial Modeling and Options Pricing.” Computational Economics, vol. 11, no. 1, Society for Computational Economics, 1998, pp. 129–63.
- Hao, Karen. “What Is Machine Learning?” MIT Technology Review, MIT Technology Review, 5 Apr. 2021, www.technologyreview.com/2018/11/17/103781/what-is-machine-learning-we-drew-you-another-flowchart/.
- IBM Cloud Education. "What Is Machine Learning?" IBM, www.ibm.com/cloud/learn/machine-learning.
- Hardesty, Larry. "Explained: Neural Networks." MIT News, 2021, news.mit.edu/2017/explained-neural-networks-deep-learning-0414.
- Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." SSRN, 2016, doi:10.2139/ssrn.2477899.
- Zarsky, Tal Z. "An Analytic Challenge: Discrimination Theory in the Age of Predictive Analytics." I/S: A Journal of Law and Policy, vol. 14.1, 2017, pp. 12-35, kb.osu.edu/bitstream/handle/1811/86702/1/ISJLP_V14N1_011.pdf.
- Boyd, Danah, and Kate Crawford. "Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon." Information, Communication & Society, vol. 15, no. 5, 2012, pp. 662-679.
- Smith A (2018) Franken-algorithms: the deadly consequences of unpredictable code. The Guardian. https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger
- Angwin J, Larson J (2016) Machine bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Canca, Cansu. "Anonymity in the Time of a Pandemic: Privacy vs. Transparency." Bill of Health, Harvard Law, blog.petrieflom.law.harvard.edu/2020/03/30/anonymity-in-the-time-of-a-pandemic-privacy-vs-transparency. Accessed 27 March 2021.
- Mittelstadt, Brent D., et al. "The Ethics Of Algorithms: Mapping The Debate." Big Data & Society, vol. 3, no. 2, 2016, pp. 1-21, SAGE Publications, doi:10.1177/2053951716679679.
- Sloan, Robert H., Richard Warner. "When Is an Algorithm Transparent?: Predictive Analytics, Privacy, and Public Policy." IEEE: Security & Privacy, SSRN, 2017, dx.doi.org/10.2139/ssrn.3051588.
- Schermer, Bart W. "The Limits Of Privacy In Automated Profiling And Data Mining." Computer Law & Security Review, vol. 27, no. 1, 2011, pp. 45-52, doi:10.1016/j.clsr.2010.11.009.