IBM CEO: “No way” trillions on AI data centres pay off at today’s infrastructure costs
An eye-catching claim made the rounds on Reddit: IBM CEO Arvind Krishna told the Decoder podcast that today’s AI infrastructure economics don’t stack up. His view is blunt – at current costs, there’s “no way” spending trillions on AI data centres will generate a profit.
He added a back-of-the-envelope calculation to make the point:
“$8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest.”
Krishna also expressed scepticism that current techniques will reach artificial general intelligence (AGI) – putting the probability at 0-1%.
Original discussion: Reddit thread (source not disclosed in post).
Breaking down the “napkin maths” on AI data centre spending
CapEx (capital expenditure) refers to upfront spending on physical assets like chips, servers, buildings, networking, and power infrastructure. Krishna’s point is about the financing burden that follows when you deploy vast sums at today’s prices and interest rates.
| Metric | Figure (as cited) | Notes |
|---|---|---|
| Total CapEx | $8 trillion | Aggregate spending on AI data centres |
| Implied annual profit needed for interest | $800 billion | Assumes heavy debt financing; exact rate not disclosed |
The argument isn’t that AI has no value – it’s that the combination of high hardware costs, energy, and financing makes the hurdle rate uncomfortably high if you scale indiscriminately. Profit must come from somewhere: model subscriptions, API usage, enterprise licences, or massive productivity gains. If those revenues don’t grow as fast as CapEx, investors will balk.
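Krishna's interest figure follows directly from the two headline numbers, although the podcast didn't state the assumed rate. A quick sketch of the implied arithmetic (the rate is inferred from the figures, not quoted):

```python
# Back-of-the-envelope check on the quoted figures.
# The implied interest rate is derived from the two numbers;
# it was not stated on the podcast.
capex = 8e12            # $8 trillion of CapEx, as cited
interest_bill = 8e11    # $800 billion of annual profit, as cited

implied_rate = interest_bill / capex
print(f"Implied annual rate: {implied_rate:.0%}")  # 10%
```

A 10% carrying cost is steep but not implausible for heavily debt-financed infrastructure at current rates, which is why the figure lands as a plausible warning rather than hyperbole.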
What actually drives AI data centre costs today?
Even without exact figures, it’s clear where the money goes:
- Chips and servers – training and inference run on GPUs and custom accelerators that are expensive and in high demand.
- Power and cooling – dense compute requires steady electricity and advanced cooling systems, both operationally intensive.
- Networking – fast interconnects are essential for training large models at scale.
- Buildings and grid connections – land, construction, and power availability all add constraints and costs.
- Software stack – orchestration, storage, observability, and security add ongoing licence and engineering costs.
- Financing – higher interest rates make large-scale builds more expensive to carry.
Training (teaching a model from data) and inference (running the model to produce outputs) both incur costs. Training is capital-intensive; inference costs scale with usage. Without careful design, either can blow the budget.
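The split between one-off training spend and usage-driven inference spend can be made concrete with a toy model. Every dollar figure below is an illustrative assumption, not a real price:

```python
# Toy cost model: training is amortised CapEx, inference scales with usage.
# All numbers are illustrative placeholders, not real prices.

def monthly_cost(train_cost: float, amortise_months: int,
                 requests_per_month: float, cost_per_request: float) -> float:
    """Amortised training cost plus usage-driven inference cost."""
    training = train_cost / amortise_months
    inference = requests_per_month * cost_per_request
    return training + inference

# A model trained for a hypothetical $10m, amortised over 24 months,
# serving 50m requests/month at an assumed $0.002 each:
cost = monthly_cost(10_000_000, 24, 50_000_000, 0.002)
print(f"${cost:,.0f}/month")
```

The useful observation is structural: the training term is fixed once spent, while the inference term grows with every new user, so a successful product's bill is dominated by the usage-driven side unless unit costs fall.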
Will current techniques achieve AGI? IBM’s 0-1% view
AGI (artificial general intelligence) refers to systems that can understand, learn, and generalise across tasks at or beyond human level. Krishna’s take is stark:
AGI likelihood from current tech: 0-1%.
Why it matters: valuations sometimes assume breakthroughs that may not arrive on schedule. If near-term progress is incremental, the case for multi-trillion CapEx weakens, and the focus shifts to concrete, profitable use cases rather than moonshots.
Implications for the UK: costs, compliance, and capacity
Pricing and availability
UK organisations already feel cost pressure from usage-based AI pricing and limited supply of top-tier accelerators. If hyperscalers moderate their buildouts, expect prices to remain firm and capacity to be prioritised for large, committed customers.
Data protection and residency
UK GDPR obligations don’t disappear with AI. If training or inference involves personal data, you need clear lawful bases, retention policies, and supplier assurances. Data residency options and auditability will matter for regulated sectors.
Sustainability and reporting
Energy use and carbon accounting are increasingly material. Boards will ask whether AI projects deliver measurable ROI and align with sustainability targets, not just headline-grabbing pilots.
Designing a realistic AI roadmap without burning cash
- Start with focused, high-ROI use cases – customer support deflection, document search, code assistance, and analytics summarisation.
- Right-size your models – smaller, fine-tuned models often beat giant general models on cost and latency for specific tasks.
- Use RAG (retrieval-augmented generation) – keep proprietary data in your control and feed only relevant snippets to the model. RAG can reduce both hallucinations and cost.
- Instrument cost per outcome – track cost per ticket resolved, per lead qualified, or per page summarised, not just per token.
- Cache, batch, and compress – response caching, batching requests, and quantisation can materially cut inference bills.
- Consider hybrid architectures – keep sensitive workloads on-prem or in VPCs; burst to cloud when needed.
- Avoid lock-in – use portable formats and APIs so you can switch models or vendors as prices and performance change.
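The caching tactic above can be sketched in a few lines. `call_model` here is a hypothetical stand-in for whatever inference API you use, and the cache key and eviction policy are deliberately simple:

```python
import hashlib

# Minimal response cache keyed on the prompt text.
# `call_model` is a hypothetical stand-in for a real inference call.
_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    # Placeholder for an expensive API or model call.
    return f"response to: {prompt}"

def cached_call(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # cache miss: pay for inference
    return _cache[key]                    # cache hit: no inference cost

first = cached_call("Summarise Q3 revenue")
second = cached_call("Summarise Q3 revenue")  # served from cache
```

In practice you would add a size limit or TTL (or use `functools.lru_cache` for in-process cases), but even a naive cache like this turns repeated identical prompts from billed calls into free lookups.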
Practical next step: automate with tools you already use
If you’re experimenting, start where value is obvious. For example, connecting GPT to spreadsheets for reporting or data cleaning can pay back quickly. See: How to connect ChatGPT and Google Sheets with a Custom GPT.
What could shift the AI data centre economics?
- Cheaper, more abundant hardware – improved supply chains and competitive accelerators could lower unit costs.
- Algorithmic efficiency – better fine-tuning, sparsity, distillation, and quantisation reduce compute needs without sacrificing quality.
- Energy innovations – cleaner, cheaper power or better cooling improves operating margins.
- Higher utilisation – multi-tenant scheduling and smarter job placement keep expensive assets busy.
- Business model clarity – bundling AI into products people already pay for can translate usage into predictable revenue.
Balanced take: why Krishna’s warning matters
On the one hand, the demand for AI capabilities is real and growing. On the other, economics matter. If the industry chases scale without matching revenue and efficiency, the return on trillions in CapEx looks shaky – exactly the concern Krishna highlights.
For UK teams, the lesson is simple: prioritise practical wins, design for cost control, and keep your architecture flexible. The technology will keep improving, but your budget won’t magically expand with it. Build for the value you can measure today, and you’ll be ready to scale when the economics truly make sense.