Does Sam Altman expect an AI crash? Government guarantees, data centres, and the China question
A popular Reddit thread is asking a pointed question: if OpenAI is confident about AI growth, why would it ask the US government to guarantee loans for data centres? The debate stems from a post by Gary Marcus alleging that OpenAI sought federal loan guarantees and then publicly denied it when backlash hit.
Beyond the outrage, there’s a serious policy question here: should governments backstop the astronomical capital costs of AI infrastructure? And what does a China-led acceleration mean for the UK’s AI competitiveness and risk management?
Original Reddit post: Does Sam Altman expect an AI crash?
What happened: the OpenAI loan guarantee controversy
According to Gary Marcus’s Substack, OpenAI (via CFO Sarah Friar) asked the White House Office of Science and Technology Policy (OSTP) for federal loan guarantees to fund AI data centres. When the request leaked and drew criticism across the aisle, Sam Altman publicly denied the need for guarantees.
“we do not have or want government guarantees for OpenAI data centers” – Sam Altman
Marcus argues this contradicts OpenAI’s prior ask and Altman’s recent comments, which he says were laying the groundwork for government support. You can read Marcus’s piece here: Sam Altman’s pants are totally on fire. Primary documentation beyond Marcus’s post has not been made public.
Why loan guarantees matter in AI
Loan guarantees lower borrowing costs by shifting some risk to the state. For AI, that risk sits in eye-watering capex: advanced data centres, scarce HBM memory, power contracts, networking, and long build times. Guarantees are common in energy and infrastructure; they’re far rarer for venture-backed software firms.
Requesting a guarantee doesn’t necessarily mean OpenAI expects a crash. It could be about reducing cost of capital, competing with state-backed rivals, or hedging against tighter credit. But it does imply the capital intensity of frontier AI is now in “national infrastructure” territory.
China’s acceleration: Kimi, Alibaba, and the competitive gap
The Reddit post claims:
- Moonshot’s Kimi – described as a free, open-source model – “gives ChatGPT a run for its money”.
- China is throwing its full weight behind AI, suggesting the US may soon be playing catch-up.
Open-source status and formal benchmarks for Kimi are not disclosed in the post. China is investing heavily in AI labs, training infrastructure, and consumer AI apps, though it also faces export controls on leading-edge chips. The core point stands: global competition is intensifying, and government support – whether via grants, credits, or guarantees – can tilt the playing field.
What this means for the UK: risk, competitiveness, and policy choices
The UK is in a tricky middle position: a leading research hub with strong startups, but without US-level capital markets or China’s state capacity. There are direct implications:
- Compute as national infrastructure – The UK is already backing national compute (e.g. Isambard-AI) and convening policy via the AI Safety Summit. Guarantees would be a step further, sharing financial risk for private buildouts.
- Energy and planning constraints – AI data centres need reliable power and grid connections. UK planning rules, energy pricing, and grid upgrades will decide how much capacity we can host domestically.
- Moral hazard vs competitiveness – Guarantees can lower costs and accelerate innovation, but they also socialise downside risk. Policymakers must define strict eligibility, transparency, and clawbacks.
- Data protection and localisation – UK GDPR and sector rules (health, finance) will push many projects toward UK/EU data residency. Local capacity matters for compliant AI at scale.
- Supply-chain exposure – If the US or China pulls ahead on compute or frontier models, UK firms could face price shocks, access limits, or regulatory fragmentation.
Is Altman hedging against an AI downturn?
It’s possible, but not the only reading. Three plausible interpretations:
- Capex hedge – Frontier training runs and inference fleets are extremely expensive. Guarantees hedge financing and supply risks without implying pessimism.
- Industrial policy alignment – In a US-China race, American firms seek parity with subsidised or state-favoured rivals. This is more geopolitics than market timing.
- Market caution – Credit conditions could tighten. If valuations reset, guarantees provide breathing room. That’s prudent finance, not necessarily “crash” positioning.
Either way, the request underscores how far AI has moved from “software margins” toward infrastructure economics.
Practical takeaways for UK developers and businesses
- Budget like it’s infrastructure – Token costs, context windows, and latency are operational realities. Design for monitoring, cost controls, and graceful degradation.
- Avoid lock-in where you can – Use model-agnostic architectures (e.g. RAG – retrieval-augmented generation – and standard APIs) so you can swap providers as prices and policies change.
- Keep data compliant – Check UK GDPR, sector guidance, and processor locations. Prefer EU/UK regions and explicit data processing terms.
- Track vendor solvency and policy risk – Your AI supplier’s cost of capital and regulatory exposure may become your risk.
- Don’t chase headlines – For many use cases, smaller models or fine-tuning beat large frontier LLMs on ROI and latency.
If you’re looking to get hands-on with practical automation, here’s a guide to wire up everyday workflows: How to connect ChatGPT and Google Sheets.
Policy options for the UK: where to draw the line
Should the UK guarantee private AI loans? A balanced approach could include:
- Targeted guarantees for compute and energy efficiency tied to open access for researchers and SMEs.
- Conditionality on transparency, safety evaluation, and responsible release practices.
- Support for domestic model evaluation, safety tooling, and talent pipelines, not just compute spend.
- Incentives for energy-efficient architectures and scheduling that align with grid constraints.
Bottom line
The OpenAI loan guarantee flap highlights a real shift: frontier AI now looks and behaves like critical infrastructure. Whether or not you buy the “crash” narrative, the financing question is legitimate – training and deployment at scale need power, chips, and patient capital.
For the UK, the task is to stay competitive without underwriting blank cheques. Back strategic compute, demand transparency, and keep options open across models and vendors. In a world of geopolitical competition and volatile capital markets, resilience beats hype.