Are AI Tokens the New Commodity? What Jensen Huang and China’s Data Chief Mean for the AI Economy

Explore whether AI tokens are emerging as commodities and what insights from Jensen Huang and China’s Data Chief mean for the AI economy.

Written by Joshua

“Tokens are the new commodity”: why two leaders said the quiet part out loud

In the same week, two very different voices framed AI in almost identical terms. At Nvidia’s GTC, Jensen Huang described AI tokens as a commodity and floated the idea of giving engineers token budgets worth half their base salary. Days later, Liu Liehong, head of China’s National Data Administration, called tokens a “settlement unit” and a “value anchor for the intelligent era”.

“Tokens are the new commodity.”

“A settlement unit and a value anchor.”

The Reddit post below connects the dots: both leaders are trying to reprice AI around productive output, not subscriptions. The argument is simple: if tokens are where value is created, they should be budgeted and priced like energy or raw materials – a cost you invest to produce something measurable – rather than a flat SaaS fee.

Read the original discussion on Reddit.

From subscription to settlement: pricing tokens like energy, not software

Tokens are the units models use to process and generate text or multimodal output. Today most buyers encounter them through opaque “usage included” plans or headline per-1,000-token prices. The post argues that’s the wrong mental model.

“The tokens are where the value gets created.”

Think of compute and energy as the crude oil. Tokens are the refined product. They even come in grades:

  • Lightweight inference – fast, cheap outputs (like regular unleaded).
  • Deep reasoning – slower, more capable chains of thought (premium grade).
  • Multimodal – text, image, audio integration (high-octane).

In this framing, token spend is an input to production. If a £400k engineer consumes only a few thousand pounds’ worth of tokens – roughly 1% of salary, when the budgets floated at GTC were closer to half of it – something’s wrong: you’re under-investing in the “fuel” that increases output per head. This matches what the post highlights from GTC:

“I’d be deeply alarmed if a $500,000 engineer consumed only $5,000 in tokens.”

Why now? Subsidies end, value-based pricing begins

The post claims leading labs are burning cash to subsidise usage – OpenAI reportedly projecting $17B of cash burn and Anthropic spending around $19B against break-even revenue. Whether or not those specific figures hold, the direction of travel is clear: the current model is not sustainable.

Framing tokens as a commodity enables value-based pricing. Organisations can plan token budgets the way they plan cloud spend or electricity. Once buyers measure return per token – not just total cost – prices can move towards what the output is actually worth.
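
To make that concrete, here’s a minimal budget-forecast sketch – every figure in it (team size, request volume, blended rate) is an illustrative assumption, not a quoted price:

```python
# Illustrative token-budget forecast (all numbers are assumptions).
# Mirrors how teams forecast cloud or electricity spend.

TEAM_SIZE = 12                      # engineers on the team
REQUESTS_PER_PERSON_PER_DAY = 200   # assumed average model calls
TOKENS_PER_REQUEST = 3_000          # assumed average input + output tokens
PRICE_PER_1M_TOKENS = 5.00          # assumed blended £ rate across grades
WORKING_DAYS = 21

monthly_tokens = (
    TEAM_SIZE * REQUESTS_PER_PERSON_PER_DAY * TOKENS_PER_REQUEST * WORKING_DAYS
)
monthly_cost = monthly_tokens / 1_000_000 * PRICE_PER_1M_TOKENS

print(f"Forecast: {monthly_tokens:,} tokens ≈ £{monthly_cost:,.2f}/month")
# Forecast: 151,200,000 tokens ≈ £756.00/month
```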

Goldman Sachs research cited in the post suggests around 30% productivity gains on targeted tasks (e.g. support, software development). If that holds, a clear ROI story emerges for disciplined buyers.

A two-tier market: low-cost China, premium US

The post sketches an energy-like market structure. China is positioned as the low-cost producer, converting cheap renewable energy and efficient model architectures into cheaper tokens. US providers compete at the premium end – better reliability, sovereignty controls, and deeper reasoning.

Provider/tier (as cited)    Example price per 1M output tokens    Notes
MiniMax (China)             $2–$3                                 Low-cost “regular grade” output
Moonshot (China)            $2–$3                                 Low-cost “regular grade” output
US premium models           ~$15                                  Reliability, sovereignty, deeper reasoning
Context windows/latency     Not disclosed                         Depends on model/version

Crucially, different applications need different grades. A bulk summarisation pipeline? Cheap, regular-grade tokens. A regulated claims decisioning workflow? Premium-grade tokens with auditable behaviour and data controls.
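
As a rough sketch of what that grade split means in money terms – the per-1M rates below come from the table above, but the workload size is a made-up example:

```python
# Cost of the same workload at different "grades", using the per-1M-output-token
# prices cited in the post (the workload size is a hypothetical example).

RATES_PER_1M_OUTPUT = {
    "low_cost_china": 2.50,   # MiniMax / Moonshot, cited at $2-$3
    "us_premium": 15.00,      # cited at ~$15
}

OUTPUT_TOKENS = 50_000_000  # e.g. a month of bulk summarisation (assumed)

for grade, rate in RATES_PER_1M_OUTPUT.items():
    cost = OUTPUT_TOKENS / 1_000_000 * rate
    print(f"{grade}: ${cost:,.2f}")
# low_cost_china: $125.00
# us_premium: $750.00
```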

What this means for UK organisations

For UK teams, the “tokens as commodity” idea has immediate, practical implications.

  • Budgeting and procurement – Move from flat licences to consumption budgets with per-team caps, forecast models, and anomaly alerts. Treat token line items like you treat compute or storage.
  • Data protection and sovereignty – Map providers to data flow responsibilities under UK GDPR and DPA 2018. Prefer vendors offering UK/EU processing and strong data processing agreements.
  • FinOps for AI – Establish usage policies, rate cards by task, and chargeback models. Track ROI per workload, not per department headline spend.
  • Model routing – Use cheaper models for low-risk tasks and switch to premium for critical or ambiguous cases (see the sketch after this list). The goal is maximum quality per pound, not minimum cost.
  • Vendor resilience – Prepare for price rebalancing as subsidies unwind. Avoid lock-in with model-agnostic tooling and exportable prompts/evaluations.
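
A minimal routing sketch, assuming a hypothetical route_task helper and two made-up model names – the real decision logic would depend on your own risk taxonomy and evaluation results:

```python
# Minimal model-routing sketch (model names and thresholds are hypothetical).
# Route low-risk work to a cheap "regular grade" model and escalate critical
# or ambiguous tasks to a premium one - quality per pound, not minimum cost.

from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    risk: str        # "low", "medium", "high" per your own risk taxonomy
    ambiguous: bool  # e.g. flagged by a classifier or heuristic

CHEAP_MODEL = "regular-grade-v1"    # hypothetical low-cost model
PREMIUM_MODEL = "premium-grade-v1"  # hypothetical premium model

def route_task(task: Task) -> str:
    """Pick a model 'grade' for a task; premium for critical or unclear cases."""
    if task.risk == "high" or task.ambiguous:
        return PREMIUM_MODEL
    return CHEAP_MODEL

# Example: bulk summarisation goes cheap, claims decisioning goes premium.
print(route_task(Task("Summarise this ticket", risk="low", ambiguous=False)))
print(route_task(Task("Approve this insurance claim?", risk="high", ambiguous=False)))
```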

How to measure return per token

To make consumption budgets work, you need measurement.

  • Instrument usage – Log tokens in/out per request, user, and task. Track latency, success/failure, and human overrides.
  • Define task-level metrics – For support: resolution time, FCR (first contact resolution), CSAT. For engineering: cycle time, escaped defects, PR review speed.
  • Run A/B pilots – Compare baseline vs AI-assisted flows. Express gains as £ per 1,000 tokens (see the worked example after this list).
  • Automate reporting – Push usage and outcomes into a shared sheet or BI tool with alerts for drift.
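
As a worked example of the “£ per 1,000 tokens” metric – every number below is an illustrative assumption from a hypothetical A/B pilot, not a benchmark:

```python
# Return per token, expressed as £ of measured value per 1,000 tokens.
# All inputs are made-up figures from a hypothetical A/B pilot.

tokens_used = 4_000_000   # total tokens consumed by the AI-assisted flow
hours_saved = 320         # measured against the baseline flow in the pilot
value_per_hour = 45.00    # £, loaded cost of the people doing the task
token_cost = 20.00        # £, what those tokens actually cost

value_created = hours_saved * value_per_hour           # £14,400.00
return_per_1k = value_created / (tokens_used / 1_000)  # £3.60 per 1,000 tokens
roi = (value_created - token_cost) / token_cost        # 719x

print(f"£{return_per_1k:.2f} of value per 1,000 tokens, ROI {roi:.0f}x")
```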

If you want a lightweight way to start tracking usage centrally, I’ve shared a practical guide to wire up ChatGPT with Google Sheets for simple dashboards and auditing: How to connect ChatGPT and Google Sheets.

Ethical considerations and risks

  • Hallucinations and bias – Cheaper tokens are not a free lunch. Evaluate quality and safety for your domain. Keep a human-in-the-loop where harm is plausible.
  • Regulatory alignment – Ensure explainability, record-keeping, and DPIAs match your risk class, especially in finance, health, and the public sector.
  • Supply chain fragility – Model policies, rate limits, or region outages can impact operations. Build fallbacks and degrade gracefully (see the sketch after this list).
  • Cross-border data – Low-cost providers may imply data transfers outside the UK/EU. Verify processing locations and data retention terms.
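
A minimal fallback sketch, assuming a hypothetical call_model helper and made-up provider names – the point is the chain-and-degrade shape, not any specific API:

```python
# Fallback-chain sketch (provider names and call_model are hypothetical).
# Try providers in preference order; degrade gracefully if all fail.

PROVIDERS = ["primary-premium", "secondary-premium", "budget-fallback"]

def call_model(provider: str, prompt: str) -> str:
    """Stand-in for a real client call; raises on rate limits or outages."""
    raise ConnectionError(f"{provider} unavailable")  # simulate an outage

def generate_with_fallback(prompt: str) -> str:
    for provider in PROVIDERS:
        try:
            return call_model(provider, prompt)
        except (ConnectionError, TimeoutError):
            continue  # rate limit, policy block, or region outage: try the next
    # Degrade gracefully: queue for retry or hand off to a human, don't fail hard.
    return "SERVICE_DEGRADED: request queued for human review"

print(generate_with_fallback("Summarise this claim"))
```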

Open questions to watch

  • Standard grades and benchmarks – Will we get common “octane” labels for reasoning depth, latency, and reliability?
  • Settlement mechanics – If tokens are a “settlement unit”, do marketplaces emerge for hedging and forward buying, like energy contracts?
  • Public sector adoption – Can UK procurement frameworks shift from licences to usage with robust guardrails and transparency?
  • Terminology – China’s “ciyuan” (combining “word” with the currency unit “yuan”) hints at a national framing. Will other regions follow?

Bottom line

The Reddit post makes a persuasive case: tokens aren’t a line item to minimise; they’re an input to production. That’s why Jensen Huang and Liu Liehong landed on commodity and settlement language at the same time. As subsidies fade, the winners won’t just build better models – they’ll master the economics of token consumption.

For UK leaders, the next step is practical: set consumption budgets, route tasks to the right “grade” of model, and measure return per token. Once you can say “we spend £X to generate £Y of value”, you’re no longer buying AI as software – you’re operating an AI economy.

Last updated: March 29, 2026