Almost nobody I know knows anything about AI – why that still happens in 2025
A Redditor asked why most people they know either don’t use AI or find it “creepy”. One friend uses ChatGPT to soften legal emails; everyone else shrugs.
“Having a robot that can do everything for me would be the greatest thing EVER.”
It’s a familiar picture across the UK. Despite the headlines, broad awareness hasn’t translated into everyday habits. Here’s what’s going on, why it matters, and practical ways to cross the chasm from hype to useful, repeatable value.
What keeps everyday people from using AI: adoption barriers explained
The “crossing the chasm” problem
Most technologies follow a pattern: innovators and early adopters experiment, then there’s a gap before the early majority picks it up. AI is still straddling that gap for non-technical users. The benefit is obvious to a minority; the rest see hassle, risk or social awkwardness.
Seven reasons normal users hesitate
- Vague value – “Ask me anything” is too broad. Without a clear job-to-be-done (e.g., rewrite this message, summarise this PDF), people bounce off.
- Trust and privacy – UK users are rightly wary about putting sensitive information into a model. Unclear data handling is a blocker.
- Bad first run – If your first prompt returns nonsense (a “hallucination”), you stop. Good prompting and better tools reduce this, but most don’t get that far.
- Cost and friction – Subscriptions, sign-ins, and app switching create small barriers that add up. If work IT locks it down, adoption stalls.
- Social norms – Saying “I used AI to write this” still feels awkward to some. In some workplaces, it’s perceived as cheating rather than tooling.
- Robot confusion – Many think AI equals humanoid robots. In reality, most value today is software: writing, analysis, data extraction, search.
- Risk without guardrails – If there’s no policy, training or sandbox, people don’t experiment with real work. Sensible caution becomes avoidance.
What this means for UK users and teams
Privacy, data protection and compliance
Under UK GDPR, you’re responsible for how personal data is processed. If you paste sensitive client or HR material into an AI tool, you need a lawful basis and clarity on how the provider processes data. The Information Commissioner’s Office has practical guidance on this.
- Check vendor data controls and enterprise settings before using real personal or confidential data.
- Use redaction or anonymisation where possible.
- Prefer accounts and settings that let you opt out of training on your content.
See the ICO’s AI and data protection guidance: ico.org.uk/for-organisations/ai/.
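One way to act on the redaction advice above is a quick pre-processing pass before anything leaves your machine. Here is a minimal sketch in Python; the regex patterns and placeholder labels are illustrative only and nowhere near exhaustive, so treat this as a starting point, not a compliance tool.

```python
import re

# Illustrative patterns only - real redaction needs broader coverage
# (names, addresses, case references) and a human review step.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b07\d{3}\s?\d{3}\s?\d{3}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}

def redact(text: str) -> str:
    """Replace matches with bracketed placeholders before pasting into a tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jo.bloggs@example.com or 07700 900123."))
```

Keeping the placeholders descriptive (`[EMAIL]`, `[UK_PHONE]`) means the model still understands the sentence structure, so rewrites and summaries usually survive the round trip intact.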
Costs and availability
Most leading tools have a free tier and a paid tier. Pricing changes often. Always verify on official pages:
- OpenAI ChatGPT pricing: openai.com
- Anthropic Claude: anthropic.com
- Google Gemini: ai.google
- Microsoft Copilot: microsoft.com/copilot
From hype to habit: practical ways to cross the chasm
Start with narrow, low-risk use cases at home
Large language models (LLMs) are text prediction systems trained on large datasets. They're strong at pattern-heavy tasks like rewriting and summarising, but they can state falsehoods with complete confidence. Start where errors are easy to catch.
- Rewrite for tone – like the Reddit example. Ask: “Make this professional, friendly, and concise. Keep all facts.”
- Summarise long PDFs – “Produce bullet-point notes and action items.” Always skim the source to verify.
- Draft letters and forms – “Create a council tax dispute letter. Use UK spelling and reference evidence I list below.”
- Planning – “Design a two-week revision plan for GCSE maths, including daily tasks and resources.”
- Compare options – “List pros and cons of three broadband providers. Include contract length and exit fees.” Then check official sites.
- Data wrangling – “Clean this CSV: normalise dates to DD/MM/YYYY, remove duplicates, export as CSV.”
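The data-wrangling prompt above can also be done locally, which keeps the data off any third-party service entirely. Here is a sketch using only the Python standard library; the column names and date formats are assumptions for illustration, so adjust them to your file.

```python
import csv
from datetime import datetime
from io import StringIO

# Inline sample data standing in for a real file.
RAW = """name,joined
Alice,2024-03-01
Bob,01/04/2024
Alice,2024-03-01
"""

def normalise_date(value: str) -> str:
    """Try a few common formats and emit DD/MM/YYYY."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d-%m-%Y"):
        try:
            return datetime.strptime(value, fmt).strftime("%d/%m/%Y")
        except ValueError:
            continue
    return value  # leave unparseable values for manual review

rows = list(csv.DictReader(StringIO(RAW)))
seen, cleaned = set(), []
for row in rows:
    row["joined"] = normalise_date(row["joined"])
    key = tuple(row.values())
    if key not in seen:  # drop exact duplicate rows
        seen.add(key)
        cleaned.append(row)

out = StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "joined"])
writer.writeheader()
writer.writerows(cleaned)
print(out.getvalue())
```

For a one-off file, pasting the CSV into a chat tool is faster; for anything recurring or containing personal data, a small script like this is safer and repeatable.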
If you work in Sheets or Excel, you can link models to reduce copy-paste. Here’s a practical guide for Google Sheets: Connect ChatGPT and Google Sheets.
Piloting AI at work without the chaos
- Pick one measurable workflow – e.g., customer email triage, meeting minutes, or first-draft job specs.
- Define success – response time down 30%, quality ratings up, or hours saved per week.
- Choose a compliant tool – Check data processing terms, UK/EU data residency options if required, and admin controls.
- Create a safe prompt and policy – Clear do/don’t: no sensitive data, all outputs reviewed, source verification for facts.
- Train for 30 minutes – How to ask, how to check, how to escalate.
- Review in four weeks – Keep what works, drop what doesn’t, and expand gradually.
Common risks and sensible guardrails
- Hallucinations – Models can invent citations or details. Mitigation: verify facts, require links to sources, or use retrieval.
- Bias and tone – Outputs can reflect biased training data. Mitigation: set tone and inclusion guidelines; review critical content.
- Confidentiality – Don’t paste secrets into consumer tools. Mitigation: enterprise plans, redaction, or on-prem solutions if needed.
- Overreliance – Keep human judgement on anything legal, medical, financial or safety-critical.
Retrieval-augmented generation (RAG) is a pattern where the model is given trusted documents at query time. It improves accuracy and auditability because answers can cite your sources rather than the model's training data.
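To make the RAG pattern concrete, here is a deliberately tiny sketch. It uses keyword overlap as a stand-in for the embedding search real systems use, and stops at building the prompt; the document names and contents are invented for illustration.

```python
# Toy retrieval: score documents by keyword overlap with the question,
# then build a prompt that asks the model to cite sources by name.
DOCS = {
    "leave-policy.md": "Staff may carry over five days of annual leave.",
    "expenses.md": "Claims must be filed within 30 days with receipts.",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    return (
        "Answer using only the sources below and cite them by name.\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("How many days of annual leave can staff carry over?"))
```

The auditability comes from that `[source-name]` convention: when the model's answer cites a document you supplied, a reviewer can check the claim in seconds instead of trusting the model's memory.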
Robots versus AI assistants: clearing the confusion
People often picture humanoid robots doing chores. That’s not today’s mainstream value. Most gains in 2025 come from software: writing, search, analysis, data entry, and workflow automation. If physical robots feel “creepy” to your friends, that’s fine. You can enjoy major benefits from an invisible assistant inside your email, docs and spreadsheets.
Why this Reddit thread matters for the UK
The gap between curiosity and daily use is still wide. The way across is not more hype, but simple, trusted workflows with clear guardrails. When the first experience is safe, useful and repeatable, people stick with it.
“Why don’t normal everyday people know anything about AI or think it’s cool?”
Many will, once it quietly saves them an hour a week. Start narrow, measure results, and respect privacy. That’s how we cross the chasm.