Chinese AI models are surging where US tech is pricey or restricted
A new Reddit post highlights data, reportedly from Microsoft via the Financial Times, showing Chinese AI models – particularly DeepSeek – gaining strong market share in countries where US tech is expensive or constrained by sanctions. The pattern looks stark: when Western models are priced high or unavailable, Chinese alternatives fill the gap.
It’s a reminder that AI adoption is driven as much by geopolitics and pricing as by model quality. For UK organisations, this raises questions about cost, compliance, and resilience in AI procurement.
Where DeepSeek is dominant: market share snapshots
The post lists four striking market shares for DeepSeek:
| Country | DeepSeek share |
|---|---|
| China | 89% |
| Belarus | 56% |
| Cuba | 49% |
| Russia | 43% |
These are all markets where US providers face restrictions or weaker commercial presence. Price sensitivity and availability are doing a lot of work here. The post also suggests sanctions are “inadvertently” creating a user base for Chinese open-source models, which often allow self-hosting or lower-cost deployment.
UAE and Singapore outpace US in AI usage
The same data points to higher overall AI usage in the UAE (~59%) and Singapore (~58%) compared to the US (~26%). That gap is large and, if accurate, underlines how policy and infrastructure shape adoption. The UAE and Singapore have both pushed hard on national AI strategies, public-sector pilots, and business incentives.
| Country | Reported AI usage |
|---|---|
| UAE | ~59% |
| Singapore | ~58% |
| United States | ~26% |
Methodology is not disclosed in the post, so treat these as indicative rather than definitive. Still, they match the broader trend: proactive investment correlates with faster adoption.
Pricing, subsidies, and sanctions: what’s driving this shift
According to the post, Microsoft attributes the shift to heavy Chinese state subsidies that undercut US pricing.
If Chinese providers can sustainably under-price US rivals, that makes them attractive in cost-sensitive markets and for workloads where “good enough” beats “best-in-class”. Sanctions and export controls appear to be another enabler: where US cloud AI is unavailable, local or Chinese options step in.
What “open-source” means here
Open-source AI typically means model weights and code are publicly available for inspection, modification, and self-hosting. In practice, licences differ – some restrict commercial use or require attribution. For adopters, open-source can cut inference costs and avoid vendor lock-in, but it shifts responsibility for security, updates, and compliance onto the user.
Implications for UK organisations
Costs and procurement
Price pressure is real. If Chinese models are substantially cheaper, UK buyers will be tempted. But consider total cost of ownership: support, integration, fine-tuning, safety controls, monitoring, and compliance can outweigh headline token prices.
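A rough comparison can make the point concrete. The sketch below uses entirely hypothetical figures (no real vendor prices) to show how fixed costs – hosting, safety controls, monitoring, staff time – can flip the answer even when one model's headline token price is several times cheaper.

```python
# Illustrative total-cost-of-ownership comparison between a cheap open-weights
# model and a pricier hosted API. All figures are hypothetical placeholders,
# not real vendor prices.

def monthly_tco(token_price_per_1m: float, tokens_per_month: float,
                fixed_costs: float) -> float:
    """Token spend plus fixed monthly costs (hosting, guardrails, staff)."""
    return (tokens_per_month / 1_000_000) * token_price_per_1m + fixed_costs

# Hypothetical scenario: the "cheap" model needs more in-house engineering.
cheap_model = monthly_tco(token_price_per_1m=0.50, tokens_per_month=200_000_000,
                          fixed_costs=9_000)   # self-hosting, safety, ops staff
hosted_api = monthly_tco(token_price_per_1m=3.00, tokens_per_month=200_000_000,
                         fixed_costs=1_500)    # managed service, less overhead

print(f"cheap open model TCO: £{cheap_model:,.0f}/month")
print(f"hosted API TCO: £{hosted_api:,.0f}/month")
```

With these invented numbers, the model with a 6x higher token price still comes out cheaper overall – which is exactly why token prices shouldn't be compared in isolation.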
Compliance and data protection
UK GDPR and sector regulations (e.g., FCA, NHS) require careful handling of personal data, auditability, and clear data flows. Self-hosted or regionally hosted models can reduce data-exfiltration risk, but you still need lawful bases, DPIAs, and vendor assessments. Verify where data is processed and stored, especially with providers outside the UK/EU.
Geopolitical and supply chain risk
Sanctions, export controls, and sudden policy shifts can disrupt access to models, updates, and support. A multi-vendor strategy with clear exit plans is prudent. Keep an eye on licensing changes and model availability in your preferred cloud regions.
Security and safety
All large models can hallucinate – confidently inventing facts – and may carry biases from training data. Evaluate safety layers, content filters, logging, and red-teaming support. For critical tasks, use retrieval-augmented generation (RAG) with verified sources and put human review in the loop.
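The RAG-plus-human-review pattern can be sketched in a few lines. This is a toy illustration, not a production design: the "corpus" is two hard-coded strings, retrieval is simple word overlap rather than a vector store, and the "generation" step just quotes the source – but the shape (answer only from verified material, cite the source, escalate when nothing matches) is the point.

```python
# Minimal RAG sketch: answer only from a verified corpus, cite the source,
# and escalate to a human when no verified source matches. The corpus and
# scoring are toy stand-ins; a real system would use a vector store and an LLM.

VERIFIED_DOCS = {
    "uk-gdpr": "UK GDPR requires a lawful basis for processing personal data.",
    "dpia": "A DPIA is needed for processing likely to result in high risk.",
}

def retrieve(question: str):
    """Pick the verified doc with the most word overlap with the question."""
    q_words = set(question.lower().split())
    doc_id, text = max(VERIFIED_DOCS.items(),
                       key=lambda kv: len(q_words & set(kv[1].lower().split())))
    return (doc_id, text) if q_words & set(text.lower().split()) else None

def answer(question: str) -> str:
    hit = retrieve(question)
    if hit is None:
        # No grounding available: route to a person instead of free-generating.
        return "No verified source found - escalate to a human reviewer."
    doc_id, text = hit
    # Grounded answer: quote the source and cite it.
    return f"{text} [source: {doc_id}]"

print(answer("What does UK GDPR require for personal data?"))
```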
Practical steps for UK teams right now
- Map workloads by sensitivity: keep regulated data on models you control (on-prem or private cloud) and use public APIs for low-risk tasks.
- Benchmark total costs: include tokens, hosting, fine-tuning, guardrails, observability, and staff time. Don’t compare token prices in isolation.
- Adopt a multi-model approach: mix closed and open models. Have fallbacks in case a provider becomes unavailable due to policy or pricing changes.
- Run pilots with clear success metrics: latency, accuracy, hallucination rate, and cost per task. Kill what doesn’t perform.
- Document compliance: DPIAs, data flows, model cards, and evaluation reports. This saves pain during audits and procurement.
- Start small, integrate where you work: for example, connecting LLMs to spreadsheets or internal tools can yield fast wins without big platform bets. See my guide on connecting ChatGPT and Google Sheets.
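The pilot metrics above are easy to track even in a spreadsheet. As a minimal sketch, the snippet below computes accuracy, hallucination rate, and cost per task from a handful of invented task records (the data is illustrative, not from any real pilot):

```python
# Pilot success metrics from the checklist above: accuracy, hallucination
# rate, and cost per task. Task records are hypothetical examples.

tasks = [
    # (correct, hallucinated, cost_in_pence)
    (True, False, 1.2),
    (True, False, 0.9),
    (False, True, 1.1),
    (True, False, 1.0),
]

total = len(tasks)
accuracy = sum(1 for ok, _, _ in tasks if ok) / total
halluc_rate = sum(1 for _, h, _ in tasks if h) / total
cost_per_task = sum(c for _, _, c in tasks) / total

print(f"accuracy: {accuracy:.0%}, hallucination rate: {halluc_rate:.0%}, "
      f"cost per task: {cost_per_task:.2f}p")
```

Agree thresholds for these numbers before the pilot starts, so "kill what doesn't perform" is a mechanical decision rather than a negotiation.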
Open questions and what to watch
- Methodology transparency: how were usage shares and adoption rates measured? Without clarity, numbers should be treated cautiously.
- Licensing and IP: are “open-source” licences truly permissive for enterprise use, and what are the obligations?
- Model quality parity: where do Chinese models match or trail US counterparts on benchmarks, safety, and tool integration? Not disclosed in the post.
- Sustainability of pricing: if subsidies are a factor, will prices rise later? Plan for volatility.
- Regulatory direction: UK and EU AI rules, and any new sanctions, could change the calculus on which models are viable.
Why this matters
AI adoption isn’t purely a technology race – it’s also about affordability, availability, and trust. The reported gains for DeepSeek in sanctioned or price-sensitive markets show how quickly the landscape can reconfigure when those variables shift. For UK organisations, the smart move is diversification, rigorous evaluation, and a clear-eyed view of compliance and geopolitical risk.
Sources and further reading
- Reddit discussion: New data from Microsoft (via FT) shows Chinese models are rapidly dominating…
- Financial Times article (as linked in the post): AI usage and model market share analysis (methodology not disclosed in the Reddit summary).