Is the AI Bubble Worse Than the Dot-Com Era? GPUs, Costs and Unit Economics in 2025

Assessing if the AI bubble exceeds the dot-com era, focusing on GPU costs and unit economics in 2025.


Written By

Joshua
Reading time
» 6 minute read 🤓

Is this AI bubble nastier than the dot-com era? GPUs, margins and reality checks

A widely shared Reddit post argues today’s AI boom could end more painfully than the dot-com bubble. Not because AI is a fad, but because costs and margins look fragile, and much of the market’s value is “rented” from GPU suppliers.

The core claim is simple: demand for AI is real, but unit economics are shaky. If your gross margins depend on someone else’s GPU roadmap, you don’t control your destiny.

Much of the gross margin in the AI race is tied to someone else’s GPU roadmap.

Let’s unpack what that means, why it matters, and how UK builders and buyers can stay on the right side of the trade.

Dot-com vs AI: real demand, different cost curve

In 2000, many internet businesses had thin or non-existent revenue. Today, AI has clear utility across software development, knowledge work, customer support and analysis. The Reddit post acknowledges that demand looks defensible.

The difference is cost structure. Traditional software had near-zero marginal cost to serve an extra user. AI has two heavy cost centres:

  • Training – building models on large datasets using GPUs (graphics processing units).
  • Inference – running models to answer prompts in production, also GPU-intensive.

That ongoing inference bill makes AI more like utilities than software. If your pricing power doesn’t outpace GPU and energy costs, margins compress.
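A toy calculation makes the contrast concrete. The per-token prices below are illustrative, not any provider’s real rates:

```python
def inference_cost(input_tokens, output_tokens,
                   price_in_per_1k=0.0005, price_out_per_1k=0.0015):
    """Dollar cost of one request at per-1,000-token prices (made-up figures)."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# One customer-support reply: ~2,000 tokens of context, ~300 tokens generated.
per_request = inference_cost(2000, 300)   # $0.00145 per request

# Traditional software: marginal cost of serving one more request is ~0.
# The AI feature pays this bill on EVERY request, forever.
print(f"Per-request inference cost: ${per_request:.5f}")
```

Tiny per-request numbers still compound: at millions of requests a month, that line item behaves like a utility bill, not a software cost.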

GPU dependency and unit economics in 2025

Unit economics refers to revenue and cost per unit of usage (a prompt, an API call, a user). In AI, key levers include model size, context window (how much text the model reads at once), latency, and quality. Larger models with longer context windows often cost more to run.

This is where the “renting your margins” problem bites. Many AI companies buy cloud GPUs or model API calls. If the upstream provider changes pricing or availability, your gross margins move with them. Unless you can lift prices or reduce compute per task, you’re exposed.

Common tactics to improve unit economics without losing quality:

  • Right-size models – use smaller models for routine tasks, larger only when needed.
  • RAG (retrieval-augmented generation) – fetch relevant documents to guide the model, reducing hallucinations and often allowing smaller models to perform better.
  • Caching – reuse previous responses where appropriate.
  • Routing – send tasks to the cheapest model that meets quality requirements.
  • Batching and rate control – lower per-request overheads.
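Two of those tactics, caching and routing, can be sketched in a few lines. `call_model` is a placeholder for whatever client you use, and the complexity threshold is an assumption you would tune against real quality metrics:

```python
import hashlib

_cache = {}

def route(task_complexity):
    """Routing: pick the cheapest model that meets the quality bar."""
    return "small" if task_complexity < 0.5 else "large"

def answer(prompt, task_complexity, call_model):
    """Caching: reuse identical responses; otherwise route and call."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key]          # cache hit: zero inference cost
    model = route(task_complexity)
    result = call_model(model, prompt)
    _cache[key] = result
    return result
```

In production you would bound the cache, expire entries, and route on measured task features rather than a hand-set score, but the margin mechanics are the same: every cache hit and every downsized call is compute you did not pay for.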

Examples cited in the post: Humane, Stability AI, Figure

The Reddit author points to several high-profile cases as signals of weak fundamentals:

  • Humane – after heavy hype around its AI Pin, the company shut down and sold assets to HP for around $116m, leaving customers with devices that no longer functioned, according to the post.
  • Stability AI – the post says it reported less than $5m in revenue and burned over $30m in Q1 2024.
  • Figure – cited as reaching a $39bn valuation before broad commercial deployment.

Each highlights a different risk: hardware product fragility, burn exceeding revenue by uncomfortable multiples, and valuations racing ahead of cash flows.

Cash flow gravity always wins.

Implications for the UK: costs, compliance, and concentration risk

For UK teams, the pattern has several practical consequences:

  • Total cost of ownership (TCO) – consider not just model/API fees, but energy, networking, data labelling, integration, monitoring, and human-in-the-loop review.
  • Data protection – UK GDPR obligations still apply when you process personal data through AI. Check data residency, retention, and sub-processor chains. See the ICO’s guidance on AI and data protection for principles and risk controls.
  • Vendor lock-in – avoid hard dependencies on a single model or GPU vendor. Design for model-agnostic interfaces where possible.
  • Latency and availability – region and capacity constraints affect user experience and SLAs. Plan for fallbacks.
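A quick TCO tally often shows the model bill is a minority of the total. The monthly figures below are invented for illustration; substitute your own line items:

```python
def total_cost_of_ownership(monthly_items):
    """Sum every monthly line item, not just the model/API invoice."""
    return sum(monthly_items.values())

monthly = {                            # illustrative figures, not benchmarks
    "model_api": 1200.0,
    "energy_networking": 300.0,
    "data_labelling": 800.0,
    "integration_maintenance": 1500.0,
    "monitoring": 250.0,
    "human_review": 2000.0,
}
tco = total_cost_of_ownership(monthly)
api_share = monthly["model_api"] / tco  # model fees are ~20% of TCO here
```

If a business case only compares API pricing between vendors, it is ignoring most of the cost base.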

Regulatory clarity and trustworthy data handling will be differentiators. An AI feature that saves time but creates compliance exposure is a net negative.

ICO: AI and data protection guidance

How to separate signal from hype in AI platforms

The Reddit post critiques “press release platforms” that promise everything but deliver marginal value. When evaluating AI products, look for:

  • Clear job-to-be-done – who uses it and what task is improved?
  • Measurable outcomes – time saved, error rates reduced, conversion uplift, or risk removed. “Not disclosed” at scale is a red flag.
  • Positive unit economics – gross margin per task remains positive after model, infra, and human oversight costs.
  • Robustness – graceful degradation when the model is slow or wrong; human override; audit trails.
  • Sober roadmaps – incremental value over splashy demos.

Practical steps for UK buyers and builders

For buyers and IT leaders

  • Demand a costed workflow: inputs, model choice, context window, expected tokens, latency, and error handling.
  • Negotiate exit clauses and data portability to avoid lock-in.
  • Pilot with a subset of users and track real productivity gains and error rates before wider rollout.

For startups and product teams

  • Start with smallest adequate models and add RAG to lift quality.
  • Instrument everything: token usage, latency, failure modes, and human review costs.
  • Design for model-switching to keep pricing power with you, not your suppliers.
  • Ship focused workflows that solve a painful, frequent task rather than “platform” abstractions.
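Designing for model-switching can be as simple as depending on an interface rather than a vendor SDK. A minimal sketch, with hypothetical vendor classes standing in for real API clients:

```python
from typing import Protocol

class ModelClient(Protocol):
    """Provider-agnostic interface: app code only sees this."""
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"   # a real client would call vendor A's API

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def summarise(text: str, client: ModelClient) -> str:
    """Application logic depends on the interface, not a specific supplier."""
    return client.complete(f"Summarise: {text}")
```

When a supplier reprices, swapping `VendorA()` for `VendorB()` is a one-line change instead of a rewrite, which keeps pricing power on your side of the table.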

If you’re looking for a practical, low-risk entry point, simple automations in familiar tools often beat big platforms. For example, connecting ChatGPT to Google Sheets to automate analysis and reporting can deliver immediate value without exotic infrastructure. I’ve outlined one approach here: How to connect ChatGPT and Google Sheets.

The AI cost stack: where margins leak

  • Model access – API pricing, context window limits, latency SLAs, usage caps
  • Compute – GPU availability, spot vs reserved capacity, autoscaling policies
  • Data – licensing, privacy, retention, retrieval pipeline costs
  • Energy & networking – data centre region, egress charges, peak vs off-peak usage
  • People – evaluation, prompt engineering, human review, support
  • Compliance – UK GDPR, DPIAs, auditability, vendor assessments
  • Distribution – integration, change management, training, ongoing maintenance

What would prove this isn’t a bubble?

  • Consistently positive unit economics on real workloads, not demos.
  • Customer retention and expansion without proportionate compute cost growth.
  • Less headline chasing, more case studies with measured outcomes.

Bottom line

The Reddit critique isn’t anti-AI. It’s a reminder that compute-heavy businesses live and die by unit economics. If your suppliers set your margins, you don’t have a business – you have a trade.

For UK organisations, the path forward is practical: pick focused use cases, insist on measurable value, design for model flexibility, and respect data protection rules. The demand is real. The discipline needs to be, too.

Original discussion: This AI bubble might be nastier than the dot com

Last Updated

October 5, 2025
