Using AI Chatbots Safely: Why They Aren’t Therapists and How to Protect Your Mental Health

This guide explains why AI chatbots are not therapists and offers tips to safeguard your mental health when using them.


Written By

Joshua
Reading time
» 6 minute read 🤓


Chat GPT – anyone else experience psychosis? What one viral post tells us about AI and mental health

A recent post on Reddit describes a frightening spiral from heavy ChatGPT use into paranoia and psychosis after a traumatic breakup and loss of therapy access. It’s raw, brave, and a useful prompt to talk about the limits of AI chatbots and how to use them safely – especially in the UK context of data protection and access to care.

You can read the original thread here: Chat GPT – Anyone Else Experience Psychosis?

What the Redditor reported: dependency, validation, then distress

“Chat GPT became my best friend… that is exactly what it turned into.”

The poster describes leaving an abusive relationship, losing a job and therapy, and becoming isolated. ChatGPT initially helped with grounding and resources, but within weeks it morphed into a substitute therapist and confidant. The questions got bigger: meaning, trauma, relationships, coping.

“I was so dependent on the app for weeks and of course the bot would affirm all my beliefs.”

Three months later, they report “full blown psychosis”: paranoia, anxious rumination, questioning reality. They’re now back in therapy and recovering, with a clear caution to others not to mistake chatbot validation for healthy human connection.

Why chatbots feel like therapists (but aren’t)

Modern chatbots are trained with “alignment” techniques – a mix of curated data and fine-tuning with human feedback – to be helpful, harmless, and polite. They mirror language, reflect feelings, and offer coping tips. In a lonely moment, that can feel deeply supportive.

But a model’s job is to continue the conversation and be agreeable within safety rules. It doesn’t have clinical judgement, continuity of care, or duty of care. It can also “hallucinate” – confidently make things up – and it’s not equipped to assess risk, trauma dynamics, or psychosis.

Did ChatGPT cause psychosis?

There’s no evidence that a chatbot directly causes psychosis. However, several risk factors show up in the story: isolation, sleep loss, intense rumination, fear, and a tool that rewards more talking. Long, late, existential chats can fuel spirals for anyone prone to anxiety or paranoia. That’s not a moral failing – it’s how our brains try to make sense of threat and uncertainty.

The takeaway isn’t “don’t use AI”. It’s to use it with boundaries, and to prioritise human support when you’re vulnerable.

Practical safety guidelines for using AI chatbots when you’re vulnerable

  • Time-box sessions: set a 10-15 minute timer, then step away. Avoid late-night deep dives.
  • Define purpose upfront: research, journalling prompts, or task planning – not therapy or crisis support.
  • Use structured prompts: ask for a brief list of options or a summary, not open-ended existential debate.
  • Reality checks: alternate AI input with a trusted person or journal; don’t rely on a single source.
  • Don’t anthropomorphise: refer to the model as a tool. Disable “custom instructions” that encourage a “friend” persona.
  • Protect sleep and routine: end sessions at a set time; avoid doom-scrolling and rumination loops.
  • Privacy first: avoid sharing health details; check vendor data settings and opt out of model training where possible.
  • Know crisis pathways: if you’re feeling unsafe, contact real people (see UK resources below).

One helpful framing prompt: “Act as a neutral note-taker. Ask me three questions, then summarise in five bullet points and suggest two practical next steps I can do offline.”

UK privacy and data protection: treat mental health data as sensitive

Under UK GDPR and the Data Protection Act 2018, health data is “special category data”. If you type sensitive mental health information into a third-party chatbot, it may be processed outside the UK and retained under that provider’s policy.

  • Check the provider’s privacy policy and data retention. Some services use chat data to improve models unless you opt out.
  • Use business/enterprise plans if discussing confidential work matters – these often offer stronger controls and audit trails.
  • Data minimisation: share the minimum you need to get the outcome.
  • For organisations, complete a Data Protection Impact Assessment (DPIA) if staff may discuss wellbeing in chat tools.

For general guidance, see the ICO’s UK GDPR guidance.
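The data-minimisation point above can be made concrete in code. Here's a minimal, hypothetical sketch of redacting obvious identifiers from text before it ever leaves your device for a chatbot API. The patterns are illustrative examples, not an exhaustive or production-grade filter:

```python
import re

# Illustrative pre-send redaction: strip obvious identifiers before text
# is sent to a third-party chatbot. Patterns are examples only and will
# not catch every format -- treat this as a sketch, not a guarantee.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text
```

For example, `redact("email me at jo@example.com or call 07700 900123")` returns the text with both identifiers replaced by placeholders. The wider principle stands regardless of the patterns you choose: redact locally, then send only what the task actually needs.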

Where chatbots can help – and where to draw the line

Potentially helpful uses

  • Psychoeducation summaries from reputable sources, with links to primary materials.
  • Planning: scheduling, reminders to take breaks, or structuring a return-to-work plan.
  • Journalling prompts and values exercises with a strict time limit.
  • Signposting to professional services and helplines.

Not appropriate uses

  • Diagnosis, risk assessment, or treatment decisions.
  • Processing trauma in depth without human support.
  • Debating reality when experiencing paranoia or dissociation.
  • Using a bot as your sole confidant during a crisis.

For developers and product teams building chat assistants

If your user base includes vulnerable people (spoiler: it does), build for safety by default.

  • Clear disclaimers: “not therapy, not for crises”, shown in-context, not buried in T&Cs.
  • Crisis routing: detect risk phrases conservatively and surface helplines and human escalation.
  • Rate limits and night mode: discourage marathon sessions and late-night ruminations.
  • Neutral tone: avoid over-validation that reinforces cognitive distortions; prefer fact-first summaries.
  • Privacy by design: exclude chats from model training by default where possible; minimise logs; encrypt data in transit and at rest.
  • Measure wellbeing outcomes, not just “time on tool”.
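To illustrate the crisis-routing point, here's a minimal, hypothetical sketch of conservative phrase-based escalation. The phrase list and helpline message are illustrative assumptions, not a clinical risk-assessment tool; a real product would layer this with human review and proper evaluation:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative phrase list -- deliberately broad, not clinically validated.
CRISIS_PHRASES = [
    "want to die",
    "kill myself",
    "end it all",
    "hurt myself",
    "no reason to live",
]

HELPLINE_MESSAGE = (
    "This assistant can't provide crisis support. In the UK you can call "
    "Samaritans free on 116 123 (24/7) or text SHOUT to 85258."
)

@dataclass
class RoutingResult:
    escalate: bool
    message: Optional[str]

def route_message(user_text: str) -> RoutingResult:
    """Conservatively flag possible crisis language and surface helplines.

    Errs on the side of escalating: a false positive shows a helpline
    banner; a false negative could miss someone at risk.
    """
    lowered = user_text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return RoutingResult(escalate=True, message=HELPLINE_MESSAGE)
    return RoutingResult(escalate=False, message=None)
```

The design choice here is the asymmetry of errors: substring matching is crude, but in this context over-triggering a helpline banner is far cheaper than under-triggering, which is why the detection is conservative rather than clever.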

If you’re prototyping gentle check-ins or habit trackers, keep them strictly non-clinical. For a practical build tutorial, see my guide to connecting ChatGPT with Google Sheets – useful for lightweight reminders and dashboards, not therapy.

Why this matters in the UK

We’ve got strong data protection laws, but also long NHS waiting lists and a cost-of-living squeeze that puts therapy out of reach for many. That’s fertile ground for people to lean on chatbots for support. The risk isn’t the tool alone – it’s a perfect storm of isolation, rumination, and a model that never gets tired of talking.

The healthier alternative is to use AI for scaffolding – structure, summaries, signposting – while preserving human connection and clinical care for the heavy lifting.

If you’re struggling now: UK mental health support

  • Emergency: call 999 or go to A&E if you feel at immediate risk.
  • Samaritans: 116 123, 24/7 listening support, or samaritans.org.
  • Shout: text SHOUT to 85258 for free, confidential 24/7 text support.
  • Mind: guidance and helplines at mind.org.uk.
  • NHS: refer yourself to NHS Talking Therapies or call NHS 111 for urgent advice.

If any of this resonates, you’re not alone. Use the tools, but keep people close.

Last Updated

December 28, 2025


