Artificial Intelligence in Therapy

Artificial intelligence in therapy is an umbrella term for the use of machine-learning algorithms or other software that mimics human understanding in order to assist or replace humans in multiple aspects of therapy.

Artificial intelligence is intelligence demonstrated by machines on the basis of input data and algorithms alone. An AI system perceives its environment and takes actions that maximize its chances of achieving its goals.[1] Unlike human intelligence, artificial intelligence can operate as a black box, reaching conclusions that are accurate yet accompanied by little interpretable reasoning.

The primary aims of artificial intelligence in therapy are to (1) analyze the relationships between the symptoms patients exhibit and possible diagnoses, and (2) act as a substitute for or supplement to human therapists, given the current worldwide shortage of therapists. Companies are developing technology to reduce therapist overload and to monitor patients more closely.
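To make aim (1) concrete, the following is a minimal Python sketch of how a learned mapping from reported symptoms to a candidate diagnosis might look. The symptom flags, diagnosis labels, and training rows are invented placeholders rather than clinical data, and a real system would require far richer features and clinical validation.

    # Hypothetical sketch: mapping 0/1 symptom flags to a candidate diagnosis.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [low_mood, sleep_disturbance, excessive_worry] (invented flags).
    X = [
        [1, 1, 0],
        [1, 0, 0],
        [0, 1, 1],
        [0, 0, 1],
    ]
    y = ["depressive_episode", "depressive_episode",
         "anxiety_disorder", "anxiety_disorder"]

    model = DecisionTreeClassifier().fit(X, y)

    # A new patient reporting low mood and disturbed sleep.
    print(model.predict([[1, 1, 0]]))  # -> ['depressive_episode']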

As the use of artificial intelligence in therapy is still relatively new, several ethical concerns have arisen around the practice.

History

The idea of artificial intelligence stems from the study of mathematical logic and philosophy. The first theory to suggest that a machine could simulate any kind of formal reasoning was the Church-Turing thesis, proposed by Alonzo Church and Alan Turing. Since the 1950s, AI researchers have explored the idea that any human cognition can be reduced to algorithmic reasoning, and have pursued two main research directions. The first is artificial neural networks, systems that model the biological brain. The second is symbolic AI (also known as GOFAI)[2], systems based on human-readable representations of problems solved by logic programming; this direction dominated from the 1950s to the 1990s, before the field shifted its focus to subsymbolic AI due to technical limitations.

The first documented use of artificial intelligence in psychotherapy is the chatbot ELIZA[3], developed from 1964 to 1966 by Joseph Weizenbaum. ELIZA was created as a pseudo-therapist that simulates human conversation using pattern-matching techniques but has no framework for contextualizing any input. ELIZA was written in MAD-SLIP and ran primarily the DOCTOR script, which simulated the interactions Carl Rogers had with his patients, notably repeating what the patient said back to them. While ELIZA was developed primarily to highlight the superficiality of interactions between AI and humans and was not intended to make recommendations to patients, Weizenbaum observed that many users believed the program understood them[4]. Subsequent chatbots, such as PARRY, which simulated a patient with paranoid schizophrenia, were also successful. Computer-to-computer therapeutic interactions were also observed, with ELIZA acting as a therapist to PARRY.
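The core technique is easy to demonstrate. Below is a toy Python sketch of ELIZA-style keyword matching with pronoun reflection; the rules and word swaps are simplified inventions for illustration, not Weizenbaum's original DOCTOR script.

    import re

    # Toy ELIZA-style rules: keyword-triggered templates with pronoun
    # reflection and no model of conversational context (invented examples).
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
        (re.compile(r"(.*)"), "Please go on."),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so input can be echoed back.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.match(utterance)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I feel anxious about my exams"))
    # -> Why do you feel anxious about your exams?

Because the program only reflects surface patterns, any input outside its rules falls through to a generic prompt, which is exactly the superficiality Weizenbaum set out to expose.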

In the 1980s, psychotherapists began to investigate the clinical use of artificial intelligence[5], primarily highlighting the possibility of using logic-based programming in quick intervention methods, such as brief cognitive behavioral therapy (CBT). This kind of therapy does not focus on the underlying causes of mental health ailments; rather, it triggers and supports changes in behavior and in cognitive distortions[6]. However, technical limitations, namely the lack of sufficiently sophisticated logical systems and of breakthroughs in artificial intelligence more broadly, together with the decrease in AI funding under the Strategic Computing Initiative, led research in this field to stagnate until the mid-1990s, when the internet became accessible to the general public. Currently, artificial intelligence is becoming increasingly widespread in psychotherapy, with developments focusing on data logging and on building mental models of patients.
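As a rough illustration of that logic-based style, the following hypothetical Python sketch encodes brief-CBT-style interventions as explicit if-then keyword rules rather than learned models; the distortion labels and Socratic prompts are invented for illustration and are not drawn from any cited system.

    # Hypothetical rule base: keyword -> (cognitive distortion, reframing prompt).
    DISTORTION_RULES = {
        "always": ("overgeneralization",
                   "Can you recall a time when this was not true?"),
        "never": ("overgeneralization",
                  "Is 'never' accurate, or are there exceptions?"),
        "should": ("should statement",
                   "What would change if this were a preference, not a rule?"),
    }

    def suggest_reframe(thought):
        # Return the first matching (label, prompt) pair for a reported thought.
        for keyword, (label, prompt) in DISTORTION_RULES.items():
            if keyword in thought.lower():
                return label, prompt
        return None, "No distortion keyword matched."

    print(suggest_reframe("I always fail at everything"))
    # -> ('overgeneralization', 'Can you recall a time when this was not true?')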

Development

Examples

Applications

Chatbots

Self-guided treatments

Therapeutic robots

Ethical concerns

The ELIZA effect

Crisis management of artificial intelligence

Data collection

Limitations

See also

References

  1. Poole, D., Mackworth, A., & Goebel, R. (1998). Computational Intelligence: A Logical Approach. Oxford University Press. ISBN 978-0-19-510270-3.
  2. Haugeland, J. (1985). Artificial Intelligence: The Very Idea. MIT Press.
  3. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
  4. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
  5. Glomann, L., Hager, V., Lukas, C. A., & Berking, M. (2018). Patient-Centered Design of an e-Mental Health App. Advances in Intelligent Systems and Computing, 264–271. https://doi.org/10.1007/978-3-319-94229-2_25
  6. Benjamin, C. L., Puleo, C. M., Settipani, C. A., Brodman, D. M., Edmunds, J. M., Cummings, C. M., & Kendall, P. C. (2011). History of Cognitive-Behavioral Therapy in Youth. Child and Adolescent Psychiatric Clinics of North America, 20(2), 179–189. https://doi.org/10.1016/j.chc.2011.01.011