AI bubble or CapEx supercycle? Why the 2025 AI build-out looks different
A thoughtful post on r/ArtificialInteligence argues we are not in an “AI bubble” in the Dotcom sense. Instead, we’re in a capital expenditure (CapEx) supercycle: an intense, front‑loaded build‑out of GPUs, data centres, power, and model development by the richest tech companies on earth.
“This isn’t a bubble waiting to burst into nothingness but a massive, front-loaded investment cycle.”
It’s a useful framing for developers, tech leaders, and investors in the UK. The technology is real, demand is real, but the spend profile is atypical. When the build‑out normalises and ROI expectations tighten, the market will reprice. The question is less “if” and more “who and how badly”.
What makes today’s AI market different from the Dotcom bubble
The Reddit author points to a structural difference: today’s AI build‑out is led by profitable incumbents with products that already scale. That matters for resilience and follow‑through.
- Who’s driving spend: Microsoft, Google, Amazon, Apple, Meta, NVIDIA, Broadcom, and the hyperscalers. Hyperscalers are cloud providers that operate at massive scale (compute, storage, networking) and can amortise big CapEx across huge customer bases.
- What’s being built: GPUs, data centres, power infrastructure, and foundation models. Foundation models are broad AI systems (often large language models, or LLMs) that can be adapted to multiple tasks.
- What’s similar to Dotcom: valuation stretch and rosy expectation curves. The market is pricing in many years of flawless execution in parts of the stack.
In short, there’s product-market fit for core capabilities, but not every company participating shares the same durability.
CapEx supercycle, explained
CapEx (capital expenditure) is long-term investment in assets like chips, servers, and power. A supercycle is an unusually large, prolonged wave of such spending. That fits the current AI moment: build capacity first, monetise later.
“This phase cannot grow linearly forever.”
Once infrastructure saturates and cost pressures bite, spending growth slows, CFOs get stricter, and valuations re-rate. The core tech remains, but the market sorts leaders from hopefuls.
Who wins, who survives, and who struggles in the AI build-out
Using the Reddit author’s framing:
| Category | Who | Why |
|---|---|---|
| Winners | Diversified hyperscalers, cloud platforms, chip makers with real moats, software ecosystems that monetise AI at scale | Distribution, cash flow, pricing power, and defensible IP make returns more likely even as growth normalises |
| Survivors (but volatile) | Model labs, foundation model vendors, second‑tier hardware tied to hyperscaler cycles | Real products and demand, but exposed to buyer consolidation and spend cycles |
| Casualties | AI “feature startups”, firms without defensible tech, anything priced for decade‑long perfect execution | Differentiation is hard to sustain; the rug gets pulled when GPU scarcity eases or platform vendors ship similar features natively |
Why this matters for the UK
For UK organisations, this dynamic influences budgets, vendor strategy, and regulatory posture.
- Cloud-first reality: Much AI value will flow through hyperscalers. Expect packaged services to get better, and discounts to reward committed spend. Vendor lock‑in remains a real trade‑off.
- Costs and availability: Compute scarcity and premium pricing won’t last forever. Plan for cost normalisation, but don’t count on it arriving on your project timeline.
- Data protection and sovereignty: UK GDPR and sector regulation still apply. If you use foundation models with customer or health data, get data flows, retention, and redaction practices in writing.
- Energy and infrastructure: Data centre growth depends on power and planning. Even if capacity is offshore, UK latency, resilience, and sustainability targets should inform provider choice.
- Skills and operations: AI value will hinge on integration (APIs, orchestration), governance, and measurement. Invest in MLOps, prompt engineering discipline, and monitoring for drift and hallucinations.
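The monitoring point above can be made concrete. Here is a minimal sketch, in Python, of a rolling drift check on a model quality metric; all names, window sizes, and thresholds are hypothetical illustrations, not from the original post:

```python
from collections import deque


class DriftMonitor:
    """Rolling check that a quality metric (e.g. human-rated answer
    accuracy on a 0.0-1.0 scale) hasn't drifted far from a baseline.
    Thresholds here are illustrative, not prescriptive."""

    def __init__(self, baseline_mean: float, window: int = 50, tolerance: float = 0.1):
        self.baseline_mean = baseline_mean
        self.scores: deque[float] = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, score: float) -> None:
        self.scores.append(score)

    def drifted(self) -> bool:
        # Only judge once the window is full, to avoid noisy early alerts.
        if len(self.scores) < self.scores.maxlen:
            return False
        current = sum(self.scores) / len(self.scores)
        return abs(current - self.baseline_mean) > self.tolerance


monitor = DriftMonitor(baseline_mean=0.9, window=5, tolerance=0.1)
for s in [0.88, 0.91, 0.90, 0.89, 0.92]:
    monitor.record(s)
print(monitor.drifted())  # stable scores near baseline: no drift flagged
```

The same pattern extends to latency, refusal rates, or hallucination spot-checks: record a score per interaction, alert when the rolling average leaves the tolerated band.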
Practical takeaways for teams in 2025
- Treat AI budgets like a portfolio: mix safer platform bets with selective experiments. Do not rely on perpetual GPU scarcity for your business case.
- Prioritise ROI and time‑to‑value: pick use cases with measurable outcomes (customer deflection, lead qualification, coding assistance), then scale.
- Choose defensible partners: favour vendors with durable moats (distribution, IP, ecosystem), and a credible plan to reduce unit costs over time.
- Design for volatility: build abstraction layers so you can swap models or providers without rewriting your stack.
- Governance from day one: document data sources, apply guardrails, and test for bias and hallucinations. Hallucinations are confident but incorrect outputs – they still happen.
- Upskill your team: small automation wins compound. If you’re experimenting with spreadsheet workflows, here’s a guide on connecting ChatGPT to Google Sheets to prototype quickly.
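The “design for volatility” point lends itself to a short sketch: a provider-agnostic interface so that swapping models means swapping one adapter, not rewriting call sites. This is a minimal Python illustration; the class and provider names are hypothetical, not a real SDK:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Completion:
    text: str
    model: str  # which underlying model produced this answer


class ChatProvider(ABC):
    """Thin boundary the application codes against; each vendor
    gets its own adapter behind this interface."""

    @abstractmethod
    def complete(self, prompt: str) -> Completion: ...


class EchoProvider(ChatProvider):
    """Stand-in adapter for demonstration; a real adapter would wrap
    a vendor SDK call here."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def complete(self, prompt: str) -> Completion:
        return Completion(text=f"({self.model_name}) {prompt}", model=self.model_name)


def summarise(provider: ChatProvider, document: str) -> str:
    # Call sites depend only on the interface, so the provider can be
    # swapped via configuration without touching this function.
    return provider.complete(f"Summarise: {document}").text


print(summarise(EchoProvider("model-a"), "Q3 CapEx report"))
# → (model-a) Summarise: Q3 CapEx report
```

The pay-off comes at repricing time: if a vendor raises prices or a cheaper model arrives, only the adapter changes.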
Open questions and risks to watch
- When does the spending curve bend? The timing is unknowable in advance, but watch hyperscaler guidance and CapEx commentary.
- Hardware pricing: as supply improves, GPU premiums may ease. That can compress margins for “GPU‑arbitrage” businesses.
- Model economics: context window sizes, latency, and fine‑tuning costs influence unit economics; expect rapid iteration.
- Regulation: transparency, safety, and provenance requirements could raise integration costs, but also level the field.
- Platform risk: if hyperscalers bundle “good enough” AI features, narrow point solutions face margin and churn pressure.
A balanced view: real tech, non-linear path
The Reddit post is right to separate durable infrastructure investment from speculative froth. It’s also right that not all participants benefit equally. The likely path is bumpy: sustained investment, real productivity gains, and periodic repricing as the market tests ROI.
For UK builders, the strategy is clear: focus on measurable value, choose partners with staying power, and keep optionality in your stack. The supercycle may cool, but the capabilities are here to stay.
Read the original discussion
You can read the full thread and join the conversation here: There is no “AI Bubble.” What we’re living through is an AI CapEx Supercycle.