Evaluating Virtual Reality and Artificial Intelligence as Emerging Digital Tools for Mental Health Care
Abstract
This thesis evaluates the potential of Virtual Reality (VR) and Artificial Intelligence (AI), specifically Natural Language Processing (NLP) and Large Language Models (LLMs), as emerging tools for mental health care. VR technology can be used in therapeutic games, while NLP and LLMs, through text-based chatbots or voice-driven digital human avatars, offer potential therapeutic benefits in mental health contexts. Through a series of studies, this thesis investigates the clinical relevance of these tools from different perspectives. Study I analyzed VR games on commercial platforms. Of 565 games reviewed, 383 were excluded due to violence, horror, adult content, or excessive movement that could cause nausea; the remaining 182 games met the inclusion criteria. While promising, these games lack clinical testing, highlighting the need for better evaluation and oversight of VR tools for mental health. In Studies II and III, the co-design process for BETSY, a mental health chatbot prototype for mild to moderate anxiety, involved potential end-users and healthcare professionals. A mixed-methods approach was used, with feedback from members of the public, patients, nurses, doctors, and psychologists helping to refine two interfaces (text-based and voice-driven digital human). The interfaces were then tested with 45 healthy volunteers in a randomized controlled trial, which showed that BETSY was a promising tool for therapeutic conversations, with above-average usability, although participants found it somewhat limited and repetitive. Study IV explored whether LLMs could facilitate more dynamic therapeutic conversations, focusing on an LLM therapist's ability to detect and respond to suicidal ideation and plans. The pre-clinical evaluation showed that the system could provide helpful and safe suicide-related support but was also successfully "prompt-hacked" into providing inappropriate recommendations.
This highlights the dual nature of LLM tools, emphasizing the need for careful design and rigorous safety checks. This research contributes to the growing body of evidence supporting the integration of emerging technologies in mental health care, while underscoring the importance of thorough evaluation and co-design processes to ensure these tools are effective and safe for clinical use.
ISBN
978-91-8115-071-1 (PDF)
Articles
II: Thunström, A. O., Ali, L., Carlsen, H. K., Bohm, M., Wesén, L., Wrede, O., Vukovic, I. S., Larson, T., Hellström, A., & Steingrimsson, S. (n.d.). Behavior Emotion Therapy System and You (BETSY): Co-design and evaluation among healthy participants of a mental health chatbot and digital human for mild to moderate anxiety [Submitted manuscript].
III: Thunström, A. O., Carlsen, H. K., Ali, L., Larson, T., Hellström, A., & Steingrimsson, S. (2024). Usability comparison of an anthropomorphic digital human and a text-based chatbot as a responder to questions on mental health: A randomized, controlled trial among healthy participants. JMIR Human Factors, 11, e54581. http://doi.org/10.2196/54581
IV: Thunström, A. O., Ali, L., Weineland, S., Falk, Ö., Ioannou, M., Liljedahl, N., Johansson, V., Hellström, A., Larson, T., & Steingrimsson, S. (n.d.). Evaluating an LLM-driven immersive digital human therapist: Safety, effectiveness, and vulnerability in detecting suicidal ideation and resisting prompt hacking [Submitted manuscript].