Google’s $40B Anthropic Bet Explained: What It Means for Amazon, CoreWeave, and the AI Compute Wars

Google’s $40B investment in Anthropic intensifies the AI compute arms race, challenging Amazon and CoreWeave.


Written By

Joshua
Reading time
» 5 minute read 🤓

Google invests $40B in Anthropic, Amazon adds $5B: is this normal in the AI compute wars?

The Reddit thread asks a sharp question: are these eye-watering investments just the new normal in AI, or is Google trying to own both sides of the race? The post claims Google has put $40B into Anthropic, days after Amazon added $5B, and hints there is a catch many write-ups are missing.

Amazon puts in $5 billion. Google follows with $40 billion.

Let’s unpack what’s been shared, why CoreWeave and Amazon matter here, what “Mythos” could refer to, and what UK developers and businesses should watch for as AI infrastructure power consolidates.

What’s actually disclosed in the Reddit post

  • Claimed investments: Google $40B; Amazon $5B (timing: days apart).
  • There is “a catch” in the deal structure that’s being overlooked (details not disclosed).
  • Mentions the CoreWeave angle, the Amazon angle, and something called “Mythos”.

Beyond those points, the post does not disclose terms, equity, voting rights, board roles, cloud spend commitments, or exclusivity.

Is this kind of mega-investment normal?

In today’s AI market, large strategic investments by hyperscalers into model labs are common. They usually aim to secure:

  • Priority access to scarce compute (specialised AI hardware and datacentre capacity).
  • Long-term cloud spending commitments and deeper product integrations.
  • Distribution advantages (e.g., preferred status in a cloud’s AI marketplace).

That said, size and structure vary widely. A big number alone doesn’t tell you much about actual control, future purchase obligations, or where workloads must run. The “catch” hinted at in the Reddit post could relate to any of those – but it isn’t specified.

Why CoreWeave matters in a Google–Anthropic deal

CoreWeave is widely known as a specialist AI compute provider. In an era of tight GPU supply, many labs secure capacity through multi-year contracts, prepayments, or revenue-linked arrangements. If CoreWeave is in the mix here (as the Reddit post suggests), the likely theme is straightforward: securing reliable, scalable, and flexible compute outside a single hyperscaler’s estate.

From a buyer’s perspective, this diversifies risk. From a cloud provider’s perspective, it can be both a hedge and a pressure point – keeping their own infrastructure competitive on price, performance, and availability.

The Amazon angle: competition and multi-cloud realism

Amazon’s $5B (as claimed in the post) signals the classic pattern: model companies often work across multiple clouds. Why?

  • Resilience – don’t get stuck waiting on one provider’s backlog.
  • Negotiation power – better pricing, better terms.
  • Route to market – reach customers where they already run workloads.

For UK teams, the upshot is practical: expect continued model availability across major clouds and direct APIs, but watch the fine print. Each relationship can come with differing SLAs, data handling, and regional availability.

What is “Mythos” here?

The Reddit post references “Mythos” without explanation. Without more detail, what Mythos refers to in this context is not disclosed. It could be an internal project name, a structural vehicle, or something else entirely. If you’re making decisions based on this, get clarity from primary sources before assuming what it means.

Why this matters for UK developers and businesses

Practical implications

  • Compute access and pricing: If large cloud providers deepen ties with leading model labs, pricing power may tighten around a few ecosystems. Keep options open.
  • Data protection: Check where inference and fine-tuning data are processed and stored. For UK/EU workloads, ensure alignment with UK GDPR and your data residency policies.
  • Vendor lock-in: Investments sometimes align with platform incentives. Design your stack for portability – standardised APIs, clean abstraction layers, and exportable prompts/data.
  • Procurement diligence: Ask for clear SLAs, regional failover, and termination/egress terms. Budget for burst and scale scenarios.

Compliance and governance

  • Documentation: Maintain records of data flows, retention, and third-country transfers.
  • Model risk: Catalog model versions, known limitations (bias, hallucinations), and testing processes – especially for customer-facing or regulated use cases.
  • Regulator attention: UK authorities are increasingly focused on cloud competition and AI safety. Expect questions on concentration risk and transparency.
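The model-risk catalogue above doesn’t need heavyweight tooling to start. A minimal sketch of a model register, assuming illustrative field names (nothing here reflects a real model or product):

```python
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    """One entry in a lightweight model-risk register (field names illustrative)."""
    name: str
    version: str
    region: str                                            # where inference runs
    known_limitations: list[str] = field(default_factory=list)
    approved_use_cases: list[str] = field(default_factory=list)


# The register itself: keyed by "name:version" so each deployed version is tracked.
registry: dict[str, ModelRecord] = {}


def register(record: ModelRecord) -> None:
    registry[f"{record.name}:{record.version}"] = record


register(ModelRecord(
    name="example-model",
    version="2024-06",
    region="eu-west-2",
    known_limitations=["hallucinates citations", "weaker on long contexts"],
    approved_use_cases=["internal summarisation"],
))
```

Recording the region alongside each version keeps data-residency questions answerable from the same place you track model risk.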

For UK guidance on AI and data protection, see the Information Commissioner’s Office (ICO) resources on AI and data protection (primary source).

Key facts we don’t have from the Reddit post

  • Equity stake, voting rights, or board involvement – not disclosed.
  • Cloud spend commitments or exclusivity – not disclosed.
  • Specifics of any CoreWeave agreement – not disclosed.
  • What “Mythos” refers to – not disclosed.

How to think about this if you’re building with AI now

  • Assume multi-cloud: Design so you can switch model providers with minimal friction. Keep prompts and data portable.
  • Benchmark pragmatically: Latency, throughput, context windows (the amount of text a model can consider at once), and cost per 1,000 tokens matter more than headlines.
  • Risk-manage access: Rate limits and waitlists happen. Build graceful degradation paths and caching where sensible.
  • Stay neutral where possible: Avoid hardwiring one vendor’s SDKs across your stack unless you need the deep integration.

So, is Google making a smart play – or overreaching?

Based on the Reddit post alone, the safest read is this: large cloud providers are racing to secure strategic positions in model supply and compute. That can be smart hedging. It can also entrench power and reduce flexibility over time.

Your response shouldn’t be tribal. It should be architectural: maintain optionality, scrutinise terms, and keep real performance and cost data at the centre of decisions.

Last Updated

April 26, 2026


