Prediction: ChatGPT is the MySpace of AI – what this Reddit hot take gets right and wrong
A recent post on r/ArtificialInteligence argues that OpenAI has fallen behind rivals like Claude, Gemini, Grok, Qwen, DeepSeek and Kimi – and that ChatGPT is heading for a MySpace-style decline. You can read the full discussion here: Prediction: ChatGPT is the MySpace of AI.
“ChatGPT is mediocre, sanitized, and not a serious tool.”
The author’s core claim is that Anthropic’s Claude (Opus/Sonnet) leads for writing and coding, Google’s Gemini is the best all-rounder, and that Grok, Qwen and DeepSeek each bring differentiated strengths. They also suggest OpenAI’s culture and business model are drifting toward mediocrity. Strong words – and worth unpacking.
What the Redditor actually argues about each model
“Opus/Sonnet are incredible for writing and coding. Gemini is a wonderful multi-tool.”
- OpenAI – ChatGPT: “mediocre”, “sanitized”, falling behind even strong open-source options.
- Anthropic – Claude (Opus/Sonnet): “incredible for writing and coding”.
- Google – Gemini: “a wonderful multi-tool”.
- xAI – Grok: “unique strengths and different perspectives”.
- Alibaba – Qwen: “unique strengths”.
- DeepSeek: “unique strengths”.
- Moonshot – Kimi: “has potential”.
These are qualitative judgements based on one user’s experience; no benchmarks, cost or latency figures are provided.
How fair is the “MySpace of AI” comparison?
Comparing LLMs is hard because results depend on your task, prompts and constraints. Three terms to keep in mind:
- Alignment – the safety and style guardrails a model follows. Tighter alignment can feel “sanitised” but reduces harmful outputs.
- Context window – how much information a model can consider at once. Larger windows help with long documents and multi-step tasks.
- RAG (retrieval-augmented generation) – connecting a model to your own data for more accurate, source-grounded answers.
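To make the RAG idea concrete, here is a toy sketch of the retrieve-then-prompt loop. The retrieval step uses naive keyword overlap purely for illustration (production systems typically use embeddings and a vector store), and the prompt format is a made-up example, not any vendor’s API.

```python
import re

# Toy RAG loop: retrieve the most relevant snippet from your own data,
# then ground the model's prompt in that snippet. Keyword-overlap scoring
# is a stand-in for real embedding-based retrieval.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(re.findall(r"\w+", query.lower()))
    return max(
        documents,
        key=lambda d: len(query_words & set(re.findall(r"\w+", d.lower()))),
    )

def build_prompt(query: str, documents: list[str]) -> str:
    """Instruct the model to answer only from the retrieved source text."""
    context = retrieve(query, documents)
    return f"Answer using only this source:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm UK time.",
]
prompt = build_prompt("What is the refund policy?", docs)
print(prompt)
```

The point is architectural rather than model-specific: grounding answers in your own sources works the same way whichever provider sits behind the prompt.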
The Redditor’s critique rings true for some workflows: stricter filters can frustrate advanced users; different models do excel at different things. But “doomed” is a big call. OpenAI still has significant distribution, developer mindshare and a maturing enterprise stack – factors that don’t show up in a single user’s session. Equally, rivals are moving fast and, in some areas, genuinely ahead.
Quick model-by-model snapshot from the Reddit post
| Model | Author’s claim | Notes for UK buyers |
|---|---|---|
| ChatGPT (OpenAI) | Mediocre; overly sanitised; behind others. | Check data processing terms, UK/EU data flows and safety settings. Evaluate with your own prompts. |
| Claude (Anthropic) | Excellent for writing and coding. | Often praised for helpfulness and tone. Confirm availability and compliance posture for your sector. |
| Gemini (Google) | Strong “multi-tool”. | Broad ecosystem integrations. Review data retention controls and UK/EU options. |
| Grok (xAI) | Distinct perspective and strengths. | Access and enterprise features vary. Assess reliability for regulated use. |
| Qwen | Unique strengths; open-source variants. | Attractive for self-hosting. Check licences and data sovereignty. |
| DeepSeek | Unique strengths. | Consider on-prem options and vendor transparency requirements. |
| Kimi (Moonshot) | Has potential. | Regional availability varies; evaluate support and documentation. |
Why this matters to UK teams in 2025
Whether or not ChatGPT is “the next MySpace”, the market has diversified. For UK organisations, that means choice – and responsibility.
- Privacy and data protection – ensure GDPR compliance, clear data processing agreements and options to disable training on your data.
- Security and sovereignty – know where data is stored/processed and whether UK or EU residency is available (or self-host if required).
- Cost control – model switching can reduce spend, but watch for hidden costs in context window usage and tool calls (not disclosed in the post).
- Accuracy and safety – alignment levels vary. Test for hallucinations, bias and refusal rates against your real workloads.
- Vendor risk – some providers are new or evolving quickly. Build exit options and avoid lock-in.
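On the cost-control point, a back-of-envelope estimator makes the “hidden context window cost” visible: long prompts dominate the bill even when answers are short. The per-token prices below are invented placeholders, not real vendor rates; substitute your provider’s published pricing before using anything like this for budgeting.

```python
# Illustrative cost-per-task estimator. PRICES are HYPOTHETICAL placeholder
# figures in GBP per 1,000 tokens -- not real vendor rates.

PRICES_PER_1K_TOKENS = {
    "model-a": (0.0020, 0.0060),   # (input price, output price)
    "model-b": (0.0005, 0.0015),
}

def task_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one call: tokens / 1000 * per-1k price, input plus output."""
    in_price, out_price = PRICES_PER_1K_TOKENS[model]
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price

# A 20k-token prompt with a 500-token answer: the input side dominates.
print(round(task_cost("model-a", input_tokens=20_000, output_tokens=500), 4))
```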
A practical approach: evaluate, mix and match
Don’t migrate on vibes. Run a structured bake-off on your tasks: coding, analysis, drafting, data extraction and creative ideation. Measure quality, latency and cost per task, not just raw “impressiveness”. A small portfolio can be perfectly sensible: one model for long-form writing, another for code, a third for vision or spreadsheet automation.
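A bake-off like this needs surprisingly little scaffolding. The sketch below records accuracy and latency per model over a shared task set; the two “models” are stub functions standing in for real provider API calls, and the substring check is a deliberately crude quality metric you would replace with graded rubrics for real evaluations.

```python
import time

# Structured bake-off sketch: run the same tasks through several models and
# record quality and latency. The model functions are placeholders; in
# practice each would wrap a provider's API client.

def model_a(prompt: str) -> str:   # stub, not a real API call
    return "4"

def model_b(prompt: str) -> str:   # stub, not a real API call
    return "four"

TASKS = [
    {"prompt": "What is 2 + 2? Answer with a digit.", "expected": "4"},
]

def run_bakeoff(models: dict, tasks: list) -> dict:
    results = {}
    for name, call in models.items():
        correct, total_latency = 0, 0.0
        for task in tasks:
            start = time.perf_counter()
            answer = call(task["prompt"])
            total_latency += time.perf_counter() - start
            correct += task["expected"] in answer   # crude quality check
        results[name] = {
            "accuracy": correct / len(tasks),
            "avg_latency_s": total_latency / len(tasks),
        }
    return results

scores = run_bakeoff({"model-a": model_a, "model-b": model_b}, TASKS)
print(scores)
```

Keeping the task set fixed and versioned is what turns anecdote into procurement evidence: re-run it whenever a vendor ships a new model.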
If you are standardising on ChatGPT today, you can still build real workflows. For example, I’ve written a step-by-step guide for linking ChatGPT with Google Sheets to automate analysis and reporting: How to connect ChatGPT and Google Sheets. The same principles apply if you swap in other providers via their APIs.
Is ChatGPT really the MySpace of AI?
Maybe – if it stops shipping, ignores power users and loses the developer ecosystem. Maybe not – if it continues to improve quality, expands enterprise controls and remains the easiest place to build. The Reddit post is a useful nudge: stop treating LLM choice as a religion and start treating it like procurement.
“It is important to realise where they stand – behind basically everyone.”
I wouldn’t go that far. But it is fair to say: in 2025, there isn’t a single “best” model. There is a best fit for your use case, risk profile and budget. Test widely, document results, and be ready to switch.
Key takeaways for a UK audience
- Run your own evaluations with representative prompts and data. Don’t rely on marketing or single-user anecdotes.
- Prioritise compliance, auditability and data protection from day one – especially in finance, health, legal and public sector.
- Design for portability: abstract your app to swap models with minimal code changes.
- Keep humans in the loop for high-stakes outputs. Track error rates and establish review policies.
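The portability point above can be sketched as a thin interface layer: application code depends on a small protocol, and switching vendors becomes a one-line configuration change. The provider classes here are hypothetical stubs, not real client libraries.

```python
from typing import Protocol

# Portability sketch: the app is written against ChatModel, never against a
# specific vendor SDK. The two clients below are stand-ins for real API
# wrappers.

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAClient:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"   # a real client would call an API here

class VendorBClient:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def summarise(model: ChatModel, text: str) -> str:
    """Application code: written once against the interface, not a vendor."""
    return model.complete(f"Summarise: {text}")

# Switching providers touches only this line:
active_model: ChatModel = VendorAClient()
print(summarise(active_model, "Q3 sales report"))
```

The same idea scales up to routing layers that pick a model per task type, which is exactly the “small portfolio” approach described earlier.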
The “MySpace” headline is provocative, but the underlying message is practical: the AI stack is plural now. Use that to your advantage.