Wednesday, June 25, 2025

Navigating AI for Mental Health Support



The Rise of AI

The technology behind Large Language Models (LLMs) has been developing for years; early versions have been powering search engines since 2018, but there were no tools that allowed the public to converse with them. This changed in 2022 with the public release of ChatGPT, which offered a conversational interface. A growing area of research is exploring the potential and limitations of Gen-AI chatbots: generative AI is the type of AI that can create new content and ideas, while a chatbot engages in authentic two-way conversation.

A recent clinical trial tested a Gen-AI chatbot called 'Therabot' (Heinz et al., 2025). The study included participants with diagnosed conditions such as Major Depressive Disorder and Generalised Anxiety Disorder and reported promising results, including an average 51% reduction in depression symptoms and a 31% reduction in generalised anxiety symptoms over four weeks. Users reported a significant 'therapeutic alliance' with the chatbot, a factor often linked to positive outcomes in human therapy.

However, this ever-expanding field of research also highlights the limitations and risks of such tools, emphasising ongoing concerns regarding safety, accurate crisis response, and the absence of genuine human understanding or accountability.

Potential Benefits

  • Anonymous Support – AI can act as a sounding board for discussing thoughts and feelings anonymously.
  • Unlimited Availability – immediate, round-the-clock support with quick answers to questions about symptoms, conditions, and therapy types.
  • Educational – AI holds information on a wide range of therapeutic modalities and can be used for ideas for mindfulness exercises, journaling prompts, and thought-reframing techniques.


Possible Risks and Limitations

  • Lack of Empathy and Human Connection – AI cannot genuinely empathise with or understand the nuances of human experience; an LLM has never experienced any human emotion.
  • No Crisis Support – AI isn’t equipped to intervene, diagnose, or connect users to emergency services.


Caution

  • Accuracy & Bias: AI models learn from vast datasets, which can contain biases or misinformation. Their responses might be incorrect or even harmful.
  • No Diagnosis or Personalised Treatment: AI cannot diagnose mental health conditions or develop tailored treatment plans based on a full clinical assessment.
  • Confidentiality & Privacy Concerns: Shared data isn’t confidential or safe. Users have no guarantee how their sensitive information will be used or stored.
  • Lack of Accountability & Regulation: There's no licensing body for AI. If an AI gives harmful advice, there's no professional accountability.
  • Superficiality: AI interactions are superficial, lacking the depth needed to address complex trauma, deep-seated issues, or personality disorders.
  • Risk of Dependency: Users could become overly reliant on AI, potentially isolating themselves from real human connections.


The value of a human psychotherapist in an AI world


AI has its uses, but it isn’t a panacea. AI offers new possibilities to both clients and therapists; for example, I’m happy to use it to make sure my worksheets are as useful and concise as possible. AI can be helpful in creating journalling prompts, and while I’m willing to go through AI scripts with clients, I’m aware of the limitations. AI isn’t a substitute for professional mental health care. As a therapist I focus on the following:

  • Genuine empathy, active listening, and the ability to build a therapeutic relationship.
  • The ability to assess complex situations, work in collaboration with clients, and tailor treatment.
  • Understanding of non-verbal cues, tone, and cultural nuances.
  • An ethical and confidential framework: I have to be accredited, insured, and undertake regular clinical supervision.

The risks of AI interaction from AI itself

"Despite my capacity for generating vast information and mimicking human interaction, I lack true consciousness, genuine empathy, and lived experience. My responses, derived from patterns, are susceptible to factual inaccuracies and ingrained biases, fundamentally limiting my ability to offer nuanced ethical judgment, a true therapeutic connection, or the real-time, compassionate understanding that only a human can provide." (Gemini, July 2025)

REBT (Rational Emotive Behaviour Therapy) is a deeply collaborative process. I work with you to identify, challenge, and dispute irrational beliefs. This isn't a generic output; it's a dynamic, personalised journey of discovery and change.




