{{Nav-Bar|Topics##}}<br>
<b>Artificial intelligence in therapy</b> is an overarching term used to describe the use of machine-learning algorithms or software to mimic human understanding in order to assist or replace humans in multiple aspects of therapy.
  
Artificial intelligence is intelligence demonstrated by machines, based on input data and algorithms alone. Its primary goal is to perceive its environment and take actions that maximize its goals.<ref>Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 24 Jan 2022.</ref> Unlike human intelligence, artificial intelligence can sometimes work as a black box, reaching accurate conclusions with little visible reasoning behind them.
  
The primary aims of artificial intelligence in therapy are to (1) analyze the relationships between the symptoms exhibited by patients and possible diagnoses, and (2) act as a substitute for or an addition to human therapists, given the current shortage of therapists worldwide. Companies are developing the technology to reduce therapist overload and to better monitor patients.

As artificial intelligence use in therapy is still relatively new, some ethical concerns have arisen on the matter.
  
 
== History ==
The idea of artificial intelligence stems from the study of mathematical logic and philosophy. The first theory to suggest that a machine could simulate any kind of formal reasoning was the [https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis Church-Turing thesis], proposed by [https://en.wikipedia.org/wiki/Alonzo_Church Alonzo Church] and [https://en.wikipedia.org/wiki/Alan_Turing Alan Turing]. Since the 1950s, AI researchers have explored the idea that human cognition can be reduced to algorithmic reasoning, pursuing research in two main directions. The first is creating [https://en.wikipedia.org/wiki/Artificial_neural_network artificial neural networks], systems that model the biological brain. The second is symbolic AI (also known as [https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence GOFAI])<ref>Williams, M., & Haugeland, J. (1987). Artificial Intelligence: The Very Idea. Technology and Culture, 28(3), 706. https://doi.org/10.2307/3105016</ref>: systems based on human-readable representations of problems, solved through logic programming. Symbolic AI dominated the field from the 1950s to the 1990s, before focus shifted to subsymbolic AI due to technical limitations.

The first documented use of artificial intelligence in psychotherapy is the chatbot [https://en.wikipedia.org/wiki/ELIZA ELIZA]<ref name="Weizenbaum1966">Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168</ref>, developed from 1964 to 1966 by Joseph Weizenbaum. ELIZA was created as a pseudo-therapist that simulates human conversation using pattern-matching techniques, with no framework for contextualizing any input. ELIZA was written in [https://en.wikipedia.org/wiki/SLIP SLIP] and ran primarily the DOCTOR script, which simulated the interactions [https://en.wikipedia.org/wiki/Carl_Rogers Carl Rogers] had with his patients, notably repeating what the patient said back to them. While ELIZA had been developed primarily to highlight the superficial nature of interactions between AI and humans, and was not intended to make recommendations to patients, Weizenbaum observed that many users did believe the program understood them<ref name="Weizenbaum1966" />. Subsequent chatbots, such as [https://en.wikipedia.org/wiki/PARRY PARRY], which simulated a patient with paranoid schizophrenia, were also successful. Computer-to-computer therapeutic interactions were also observed, with ELIZA acting as a therapist to PARRY.
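
The pattern-matching technique behind ELIZA can be shown in a brief sketch. The keyword rules and pronoun reflections below are illustrative stand-ins for the DOCTOR script, written in Python rather than SLIP: a keyword pattern is matched, the patient's own words are "reflected" into the second person, and the fragment is slotted into a canned response.

<syntaxhighlight lang="python">
import re

# Pronoun "reflections" used to turn the patient's words back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

# Illustrative keyword rules (not Weizenbaum's original DOCTOR script).
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones in the matched fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching keyword pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free fallback prompt

print(respond("I am worried about my exams"))
# -> "How long have you been worried about your exams?"
</syntaxhighlight>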
In the 1980s, psychotherapists started to investigate the clinical use of artificial intelligence<ref>Glomann, L., Hager, V., Lukas, C. A., & Berking, M. (2018). Patient-Centered Design of an e-Mental Health App. Advances in Intelligent Systems and Computing, 264–271. https://doi.org/10.1007/978-3-319-94229-2_25</ref>, primarily highlighting the possibilities of using logic-based programming in quick intervention methods, such as brief [https://en.wikipedia.org/wiki/Cognitive_behavioral_therapy cognitive behavioral therapy] (CBT). This kind of therapy does not focus on the underlying causes of mental health ailments; rather, it triggers and supports changes in behavior and in cognitive distortions<ref>Benjamin, C. L., Puleo, C. M., Settipani, C. A., Brodman, D. M., Edmunds, J. M., Cummings, C. M., & Kendall, P. C. (2011). History of Cognitive-Behavioral Therapy in Youth. Child and Adolescent Psychiatric Clinics of North America, 20(2), 179–189. https://doi.org/10.1016/j.chc.2011.01.011</ref>. However, technical limitations, such as the lack of sophisticated logical systems, the absence of breakthroughs in artificial intelligence technology, and the decrease in AI funding under the [https://en.wikipedia.org/wiki/Strategic_Computing_Initiative Strategic Computing Initiative], led research in this field to stagnate until the mid-1990s, when the internet became accessible to the general public.

Currently, artificial intelligence is becoming increasingly widespread in psychotherapy, with developments focusing on data logging and building mental models of patients.

== Development ==

The following information describes the general development of chatbots used in psychotherapy, though exceptions may exist.

Three concepts are highlighted as the main considerations in constructing a chatbot: intentions (what the user demands, or the purpose of the chatbot), entities (what topic the user wants to talk about), and dialogue (the exact wording the bot uses). Most psychotherapy chatbots are similar in the latter two categories, differing in their intentions. Chatbots can be separated into two main categories: chatbots built for psychological assessments and chatbots built for help interventions, as sketched below.
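
One way to picture these three concepts is as a small configuration object, sketched here in Python. The intent names, sample phrases, and entity lists are hypothetical, chosen only to show how intentions, entities, and dialogue are kept separate.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Intent:
    """What the user demands: the purpose the bot serves."""
    name: str
    sample_phrases: List[str] = field(default_factory=list)

@dataclass
class BotConfig:
    intents: List[Intent]           # purposes the bot can serve
    entities: Dict[str, List[str]]  # topics the user may want to talk about
    dialogue: Dict[str, str]        # exact wordings, keyed by intent name

# Hypothetical configuration separating the three concerns.
config = BotConfig(
    intents=[
        Intent("assessment", ["I want to check my symptoms"]),
        Intent("intervention", ["I need help coping"]),
    ],
    entities={"symptom": ["insomnia", "low mood"], "duration": ["days", "weeks"]},
    dialogue={
        "assessment": "Let's begin with a few questions about how you've been feeling.",
        "intervention": "I'm here to help. What's been on your mind?",
    },
)
print(config.dialogue["assessment"])
</syntaxhighlight>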
  
Given the interactive nature of therapy, most artificial intelligence tools developed around psychology are based on the [https://en.wikipedia.org/wiki/Turing_test Turing Test]. The test proposes that if a human cannot distinguish between a machine and a human, the machine can be considered “intelligent”. A machine must be able to mimic a human being to build trust with the patient, so that the patient is willing to talk to the chatbot.
  
Chatbots rely on natural language processing (NLP) to generate their responses. Natural language processing is the programming of computers to understand human conversational language. Early tools (such as ELIZA or AliceBot, which stands for Artificial Linguistic Internet Computer Entity) were mostly developed on preconfigured templates with human hard-coding; they had no framework for contextualizing input and struggled to follow a coherent conversation. In the present day, NLP tools such as ELMo and BERT are used for feature extraction from large texts, while other tools like GPT-2 are used to generate response paragraphs on particular topics. The prevalence of chatbots in different fields has also given rise to programs developed specifically for chatbot development; notable examples include IBM Watson, API.ai, and Microsoft LUIS.
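
As a rough sketch of how these tools divide the labor, the example below uses the Hugging Face transformers library (an assumed toolkit; the article does not prescribe one) to run BERT-style feature extraction and GPT-2-style text generation.

<syntaxhighlight lang="python">
# pip install transformers torch
from transformers import pipeline

# Feature extraction: a BERT model maps each token to a dense vector that
# downstream components (e.g., a symptom classifier) can consume.
extractor = pipeline("feature-extraction", model="bert-base-uncased")
features = extractor("I have been feeling anxious lately")
print(len(features[0]), "tokens, each mapped to a", len(features[0][0]), "dim vector")

# Generation: a GPT-2 model produces a response paragraph for a given topic.
generator = pipeline("text-generation", model="gpt2")
prompt = "When a patient reports feeling anxious, a therapist might say"
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
</syntaxhighlight>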
  
In chatbots built for psychological assessments, the chatbot is either trained with or hard-coded on a pre-assessment interview that asks for demographic information about the patient, including age, name, occupation, and gender. These chatbots act more like virtual assistants, and users are explicitly warned to use concise language so that the virtual assistant can recognize entities with more accuracy.
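
A minimal sketch of such a hard-coded pre-assessment interview follows; the slot names and prompts are hypothetical examples, not a validated intake instrument.

<syntaxhighlight lang="python">
# Hypothetical hard-coded pre-assessment interview. Each slot pairs a
# demographic field with the exact prompt the bot uses to fill it.
INTAKE_SLOTS = [
    ("name", "What name would you like me to use?"),
    ("age", "How old are you?"),
    ("occupation", "What is your occupation?"),
    ("gender", "How do you describe your gender?"),
]

def run_intake() -> dict:
    """Ask each question in order, collecting concise answers."""
    record = {}
    for field, prompt in INTAKE_SLOTS:
        # Users are warned to answer concisely so entities are recognized reliably.
        record[field] = input(prompt + " (briefly, please) ").strip()
    return record

if __name__ == "__main__":
    print(run_intake())
</syntaxhighlight>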
  
Chatbots built for help interventions put a focus on intention and provide evidence-based treatment, such as cognitive behavioral therapy.
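
As an illustration of how an intervention-focused bot can act on a recognized intention, the sketch below maps a detected cognitive distortion to a CBT-style reframing prompt. The categories and wordings are illustrative only, not an evidence-based protocol.

<syntaxhighlight lang="python">
# Illustrative mapping from detected cognitive distortions to CBT-style
# reframing prompts. A real system would detect the distortion with an
# NLP classifier; here it is passed in directly.
CBT_PROMPTS = {
    "catastrophizing": "What is the most realistic outcome, not just the worst one?",
    "all_or_nothing": "Is there a middle ground between total success and failure?",
    "mind_reading": "What evidence do you have for what they are thinking?",
}

def intervene(distortion: str) -> str:
    """Return a reframing prompt for the detected distortion, if known."""
    return CBT_PROMPTS.get(distortion, "Can you tell me more about that thought?")

print(intervene("catastrophizing"))
</syntaxhighlight>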

== Examples ==

== Applications ==

=== Chatbots ===

=== Self-guided treatments ===

=== Therapeutic robots ===

== Ethical concerns ==

=== The ELIZA effect ===

=== Crisis management of artificial intelligence ===

=== Data collection ===

== Limitations ==

== See also ==

== References ==
<references/>
