Nvidia, OpenAI and “circular funding”: what the viral Reddit post claims
A Reddit post doing the rounds argues that Nvidia is “literally paying its customers to buy its own chips”. The author claims Nvidia has agreed to give OpenAI $100 billion, which OpenAI would then use to purchase Nvidia GPUs. They also point to wider AI infrastructure spending, alleged negative ROI, and bubble risks.
It’s a punchy take, and it’s resonating because many teams are feeling the mismatch between AI promise and day-to-day returns. But several specifics aren’t verified in the post. Below I summarise the claims, where evidence is missing, and what this means for UK organisations planning AI budgets.
Key claims vs what’s verified
| Claim | Detail | Source cited | Verification |
|---|---|---|---|
| Nvidia gives OpenAI $100B which is used to buy Nvidia chips | “Circular” funding loop | Not disclosed | Not verified in the post |
| AI firms need $2T revenue by 2030 to cover infra; will make $1.2T | Gap of $800B | Bain report (not linked) | Not verified in the post |
| OpenAI to burn $115B by 2029; valued at $500B | Never profitable | Not disclosed | Not verified in the post |
| MIT: 95% of companies saw zero ROI from AI | Enterprise ROI | Not disclosed | Not verified in the post |
| Harvard: AI makes workers less productive | Quality/productivity | Not disclosed | Not verified in the post |
| DeepSeek shock wiped $1T in a day; Nvidia -17% | Market reaction | Not disclosed | Not verified in the post |
The Reddit thread is here for context: Nvidia is literally paying its customers to buy its own chips.
What the post gets at: vendor financing, capex hunger and bubble fears
Even if the specific numbers aren’t sourced, the post taps into three real tensions.
1) Vendor financing and circular demand
Big tech and telecoms have historically used vendor financing – where a supplier lends or invests in a customer to accelerate purchasing. It isn’t inherently dodgy; it’s a way to scale a market. But it can mask real demand if overused, and regulators and investors usually scrutinise it closely.
“Nvidia is giving a company $100 billion so that company can buy Nvidia products.”
Whether this particular claim is true is not verified in the post. The broader point: if supplier funding props up demand, revenues can look healthier than underlying end-customer adoption.
2) The infrastructure bill is massive
Training and serving frontier models demands heavy capital expenditure (capex) in GPUs, data centres and power. The post argues that projected AI revenues won’t cover the build-out by 2030. The shape of that argument is plausible even if the quoted numbers aren’t verified, and it mirrors the dot‑com and fibre build-outs, where supply ran ahead of demand before eventually being absorbed.
3) ROI and productivity are uneven
Return on investment (ROI) varies widely. Many pilots don’t reach production; some deployments deliver clear wins in support, marketing ops, or coding assistants. The post cites MIT and Harvard studies claiming low or negative returns. No sources are provided, so treat those statistics as unverified, but do expect highly variable outcomes depending on use case, data quality, and change management.
Definitions and quick context
- ROI – return on investment. The gain relative to cost.
- Capex – capital expenditure on long-lived assets like data centres.
- Compute – processing capacity to train or run models.
- AGI – artificial general intelligence, roughly human-level capability across tasks.
Why this matters for UK organisations
Cloud bills and cost realism
GPU scarcity and demand cycles hit UK teams via higher cloud prices, quotas, and long lead times. If market demand softens or competition increases, prices can ease – but don’t bank on it. Budget for sustained compute costs and consider model right-sizing and usage caps.
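Budgeting for sustained compute starts with a simple per-token cost model plus a hard cap. A minimal sketch follows; all prices, volumes and the cap here are hypothetical placeholders, so substitute your provider's actual per-token rates and your own traffic estimates.

```python
# Rough monthly cost model for LLM usage, with a hard usage cap.
# Prices and volumes are hypothetical placeholders -- replace them with
# your provider's real per-token rates and your measured traffic.

def monthly_cost(requests_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 price_in_per_1k: float,
                 price_out_per_1k: float,
                 days: int = 30) -> float:
    """Estimate monthly spend from average token usage per request."""
    per_request = (avg_input_tokens / 1000) * price_in_per_1k \
                + (avg_output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * days

def capped(cost: float, monthly_cap: float) -> tuple[float, bool]:
    """Apply a budget cap; returns (billable cost, cap_hit flag)."""
    return (min(cost, monthly_cap), cost > monthly_cap)

# Example: 5,000 requests/day, 800 input + 300 output tokens each,
# at hypothetical rates of 0.0025/0.01 per 1k tokens -> ~750/month.
est = monthly_cost(5000, 800, 300, 0.0025, 0.01)
billable, hit_cap = capped(est, monthly_cap=1000.0)
```

Even a crude model like this makes "what if usage doubles?" a five-second question rather than a surprise on the invoice.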
Procurement, compliance and data protection
Under UK GDPR and the ICO’s guidance, you remain accountable for how models handle personal data. Prioritise architectures like RAG (retrieval-augmented generation), which keep sensitive data in your control, and ensure vendors offer UK/EU data residency where needed.
Energy, latency and locality
UK data centre capacity and grid constraints can affect availability and latency-sensitive workloads. If your use case is customer-facing in the UK, factor in latency and potential regional hosting limits.
Pragmatic steps to de-risk AI adoption
Start with narrow, measurable use cases
Pick tasks with clear baselines – first-line support deflection, document search, code review – and measure impact monthly. Kill pilots that fail to meet thresholds; double down on those that do.
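The kill/scale decision works best when the threshold is agreed before the pilot starts. A sketch of that monthly review, assuming a simple relative-uplift metric; the metric names and the 20% bar are illustrative, not a standard:

```python
# Sketch of a monthly pilot review: compare measured impact against a
# pre-agreed uplift threshold and decide continue or kill.
# The 20% bar in the example is an illustrative assumption.

def review_pilot(baseline: float, measured: float, min_uplift: float) -> str:
    """Return 'scale' if relative uplift meets the threshold, else 'kill'."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    uplift = (measured - baseline) / baseline
    return "scale" if uplift >= min_uplift else "kill"

# Example: support deflection rose from 20% to 26%, a 30% relative
# uplift, against a 20% bar -> scale.
decision = review_pilot(baseline=0.20, measured=0.26, min_uplift=0.20)
```

Writing the rule down as code (or even a spreadsheet formula) removes the temptation to move the goalposts once a pet pilot underperforms.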
Right-size models and spend
- Use smaller, cheaper models for simple tasks; reserve large models for complex reasoning.
- Cache prompts, limit context windows, and consider batching to cut token usage.
- For proprietary data, explore RAG before fine-tuning to reduce training costs and drift.
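The first two bullets can be sketched in a few lines: a crude router that sends short, simple requests to a cheap tier and a cache so repeated prompts cost nothing. Model names and the length threshold are hypothetical placeholders:

```python
# Sketch of model right-sizing: route simple requests to a cheap model,
# reserve the large model for complex ones, and cache repeated prompts.
# "small-model"/"large-model" and the 2000-char cutoff are placeholders.

from functools import lru_cache

def choose_model(prompt: str, complexity_hint: str = "auto") -> str:
    """Crude router: length and an optional hint decide the tier."""
    if complexity_hint == "complex" or len(prompt) > 2000:
        return "large-model"   # hypothetical expensive tier
    return "small-model"       # hypothetical cheap tier

@lru_cache(maxsize=1024)
def cached_answer(prompt: str) -> str:
    """Identical prompts are served from cache instead of re-billed.
    In reality this would call the provider chosen by choose_model()."""
    return f"[{choose_model(prompt)}] response"

a = cached_answer("Summarise this ticket")  # first call: computed
b = cached_answer("Summarise this ticket")  # repeat: served from cache
```

In production the routing signal would come from task type or a classifier rather than prompt length, but the cost shape is the same: most traffic lands on the cheap tier.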
Avoid lock-in where possible
Abstract your model layer so you can swap providers if prices shift. Monitor open models that offer near‑frontier quality at lower cost, but validate safety and compliance.
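Abstracting the model layer can be as light as one interface that application code depends on instead of a vendor SDK. A minimal sketch, with stub providers standing in for real SDK wrappers:

```python
# Sketch of a thin provider abstraction so the model layer can be swapped
# without touching application code. Provider classes here are stubs;
# in practice each would wrap a real vendor SDK.

from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"A: {prompt}"   # stub; would call vendor A's API

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"B: {prompt}"   # stub; would call vendor B's API

class ModelLayer:
    """Application code depends on this, never on a vendor SDK directly."""
    def __init__(self, provider: ChatProvider):
        self.provider = provider

    def ask(self, prompt: str) -> str:
        return self.provider.complete(prompt)

layer = ModelLayer(ProviderA())
# Swapping vendors is one line, not a refactor:
layer = ModelLayer(ProviderB())
```

The same seam is where you'd add logging, cost tracking, and the safety and compliance checks mentioned above, so they survive a provider switch.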
Prove value quickly
Spin up a small internal proof of concept (POC), then scale. If you need a simple automation starter, here’s a practical guide: How to connect ChatGPT and Google Sheets.
Questions to ask your AI vendors
- What is your pricing roadmap for the next 12 months? Any committed discounts for sustained usage?
- Where is data processed and stored? Can we enforce UK/EU data residency?
- What’s the documented ROI from comparable customers, and can we pilot on a fixed budget?
- How do you handle model updates, regressions and hallucinations? What guardrails are available?
- If your upstream costs fall, how and when do you pass savings on?
What to watch next
- Earnings calls for disclosures on capex, supply agreements, and any vendor financing structures.
- Cloud provider GPU pricing and queue times – real indicators of demand vs supply.
- Competitive pressure from efficient open-source models that compress costs.
- Shifts from training to inference economics – usage patterns, caching, and on-device/edge deployment.
- Debt markets’ appetite for data centre financing; tighter credit can slow build-outs.
Bottom line
The Reddit post raises legitimate concerns about circular demand, capex burn and uneven ROI, but several headline numbers are not verified in the thread. Treat it as a hypothesis, not a conclusion. For UK leaders, the sensible stance is disciplined experimentation: constrain spend, measure outcomes, keep architectural flexibility, and insist on evidence from vendors.
If a bubble is inflating, disciplined teams will waste less on hype and be ready to scale when costs normalise. If the upside arrives faster than expected, the same discipline will help you compound real gains.