AI Anxiety Is Real: Why Tech Workers Are in Crisis and How Leaders Can Respond

Bay Area therapists report a spike in existential anxiety among AI workers. Here's what that signals – and how tech leaders can respond.

Written By

Joshua
Reading time
» 5 minute read 🤓


Bay Area therapists say AI workers are in crisis: what the Reddit post reveals

A short but striking Reddit post claims therapists in the Bay Area are seeing a spike in existential anxiety among AI workers. The author cites a Menlo Park psychotherapist who says clients are talking about catastrophic outcomes in a way she has never seen.

“I’ve never had clients talk about the end of the world the way that they are right now.”

Source: Reddit thread shared by /u/ThereWas.

Why this resonates beyond Silicon Valley

Even if the post is anecdotal, it reflects something many in AI and software teams will recognise: rapid change, ambiguous risk, and constant pressure to prove value. When your tools, models and job scope all shift monthly, it is easy to feel precarious.

Two things amplify that pressure:

  • Hype cycles and doom narratives. Headlines swing between “AI will save the world” and “AI will end it”. That whiplash is destabilising, especially for people building the tech.
  • Opaque decision-making. If leaders roll out AI initiatives without clarity on data use, job design or success criteria, uncertainty fills the gaps.

Technical issues add fuel. Large language models (LLMs) can “hallucinate” – generate confident but false answers – and organisations are still figuring out alignment (making model behaviour match human goals and constraints). Where there is risk without a plan, there is anxiety.

What this means for UK teams and leaders

UK companies are adopting AI across finance, retail, public services and start-ups. The opportunity is real, but so are compliance and workforce questions. Under UK GDPR, leaders must know where data goes, on what basis it’s processed, and how decisions are explained. Vague assurances won’t cut it with regulators – or with staff.

Put simply: responsible AI and responsible people practices are now the same conversation.

Practical steps for UK tech leaders to reduce AI anxiety

1) Communicate what AI is for – and what it is not

  • State the use cases you are prioritising (e.g., drafting, summarising, code review) and what is out of scope (e.g., performance surveillance, automated firing decisions).
  • Explain expected impact on roles. If you don’t know yet, say so and share the review timeline.

2) Build guardrails before scale

  • Publish a short AI use policy: permitted tools, sensitive data rules, human-in-the-loop checks, incident reporting.
  • Map data flows for any generative AI tool. Check retention, training use, and region. The ICO’s guidance on AI and data protection is a good starting point: ICO AI guidance.

3) Invest in skills with time-boxed learning

  • Give teams protected hours to experiment on low-risk tasks with clear evaluation criteria.
  • Pair upskilling with small, measurable pilots. Ship something useful in two weeks, not a grand strategy in two quarters.

4) Normalise mental health support

  • Remind teams of Employee Assistance Programmes, counselling options and reasonable workload expectations.
  • Model healthy behaviours: no out-of-hours pressure, sensible deadlines, and meeting-free focus time.

5) Keep the narrative grounded

  • Avoid utopian or doomer sloganeering. Focus on evidence, limits, and the human oversight plan.
  • Own uncertainty, and update staff as you learn. Psychological safety grows when leaders say “we don’t know yet, here’s how we’ll find out”.

For individual practitioners: coping strategies that actually help

  • Shape your inputs. Schedule your AI news intake and mute doom-scrolling. Read model cards and vendor docs over hot takes.
  • Pick one hands-on project. Mastery reduces fear. For example, try automating a reporting task responsibly – here’s a practical guide: Connect ChatGPT and Google Sheets (with a Custom GPT). Use dummy data or obtain permission before testing.
  • Name the risks. If hallucinations or bias could hurt your use case, add explicit checks: human review, retrieval from trusted sources, or simply not using a model for that step.
  • Build a peer circle. Share prompts, patterns and pitfalls. Collective problem-solving beats solitary doom.
  • Prioritise wellbeing. If anxiety is persistent or disruptive, speak to your GP or a qualified professional.
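The "name the risks" step above can be sketched in code. This is a minimal, hypothetical example – none of these function or variable names come from a real library – showing the pattern of gating an LLM's answer behind explicit checks (trusted-source grounding plus a crude hedge detector) instead of trusting it outright:

```python
# Hypothetical sketch of a human-in-the-loop gate for LLM output.
# TRUSTED_SOURCES and the hedge list are illustrative assumptions,
# not part of any real API.

TRUSTED_SOURCES = {"internal-wiki", "policy-handbook"}

def needs_human_review(answer: str, cited_sources: set[str]) -> bool:
    """Flag an answer for review if it cites no trusted source or hedges heavily."""
    if not cited_sources & TRUSTED_SOURCES:
        return True  # no grounding in a trusted source: a human should check it
    hedges = ("i think", "probably", "might be")
    return any(h in answer.lower() for h in hedges)

# Usage: route flagged answers to a reviewer instead of publishing them.
print(needs_human_review("Revenue was £2m, per the policy-handbook.",
                         {"policy-handbook"}))  # False: grounded and confident
print(needs_human_review("It might be £2m.", set()))  # True: ungrounded
```

Real pipelines would use proper retrieval and evaluation tooling, but the principle is the same: make the conditions for trusting a model's output explicit, and default to human review when they aren't met.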

Ethics, safety and transparency without the fear-mongering

It is responsible to care about misuses of AI and long-term risks. It is also responsible to distinguish between immediate, controllable risks (privacy breaches, biased outputs, over-reliance on non-deterministic models) and speculative ones. Teams can reduce near-term harms today with standard controls: data minimisation, access management, red-teaming, and human oversight.

Clarity reduces anxiety. When people know the model’s limits, the data handling plan, and who is accountable for outcomes, they can do their best work.

If you’re struggling right now

If anxiety is persistent or affecting your daily life, speak to your GP, use your employer’s support options, or contact Samaritans on 116 123 (free, 24 hours a day, in the UK and Ireland). Work pressures are real, but you don’t have to manage them alone.

Bottom line: lead with clarity, design for humans

The Reddit post spotlights a genuine trend: AI is moving faster than many teams’ capacity to process it. Anxiety fills the vacuum left by unclear goals, fuzzy ethics, and performative hype. UK leaders can fix much of this by being specific about use cases, data governance and role design – and by supporting the humans doing the work.

Read the original discussion: Bay Area therapists say AI workers are in crisis.

Last Updated

April 5, 2026


