Is AI Progress Exponential or Logistic? A Data-Driven Reality Check

This analysis examines whether AI progress follows an exponential or a logistic growth curve, and what the answer means for UK organisations.

Written By

Joshua

Exponential vs logistic: why people assume AI will improve forever

A thoughtful Reddit post from an applied mathematician asked a simple question: why do so many people assume AI improvement is inherently exponential? The author sketches the intuition from differential equations – many systems start with an exponential burst, then settle onto a logistic S-curve as constraints kick in.

“We assume exponential growth, but most systems have an exponential initiation… The most notable is the logistic curve.”

If you want the maths intuition, this Duke overview of ODEs explains the shapes neatly. You can read the original thread here: Why does everyone assume AI improvement is inherently exponential?

What exponential and logistic growth mean for AI

In plain terms:

  • Exponential growth: each step compounds on the last. Early progress is slow, then it accelerates rapidly.
  • Logistic growth: it starts exponentially, then levels off as constraints (data, money, energy, physics, regulation) limit further progress.

Most technologies follow S-curves. They look exponential at first, and only later does the plateau become obvious. AI may be no different – although plateaus can be followed by new S-curves when a fresh technique or input arrives.
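
To make the two shapes concrete, here is a minimal Python sketch of the closed-form curves. The growth rate and carrying capacity are illustrative parameters chosen for the demo – they are not fitted to any real AI metric:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Pure exponential growth: x(t) = x0 * e^(r*t). No ceiling, ever."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: same early rate r, but capped at carrying capacity K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on the curves are nearly indistinguishable; once the logistic
# curve approaches K, the two diverge dramatically.
for t in (0, 2, 4, 10, 20):
    print(f"t={t:>2}  exponential={exponential(t):>10.1f}  logistic={logistic(t):>6.1f}")
```

Note that with the same starting point and rate, you cannot tell the curves apart from early data alone – which is exactly why "it looks exponential so far" is weak evidence against a plateau.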

Why many people expect inexorable, exponential AI improvement

  • Recent history looks exponential. In the past few years, large language models (LLMs) have improved quickly on useful tasks. That creates narrative momentum: yesterday’s “impossible” becomes tomorrow’s API call.
  • Scaling laws set expectations. Published research shows model performance often improves in a predictable way as you increase model size, data and compute. That encourages the belief that more resources will keep delivering better systems. See, for example, Scaling Laws for Neural Language Models and the follow-up efficiency work in Chinchilla.
  • Economic feedback loops. Better models attract more users, more revenue and more investment, which funds more compute and talent. That can look like self-sustaining exponential growth.
  • Benchmarks and marketing. New state-of-the-art results arrive frequently. Plateaus on some tasks are overshadowed by fresh test suites and demos, so the public narrative skews towards acceleration.
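
The scaling-law point above can be illustrated numerically. A power law L = a·C^(−b) is a straight line in log–log space, so its exponent can be recovered with a simple linear regression. The data below is synthetic, generated from an assumed power law purely for illustration – the parameters are not from any published result:

```python
import math, random

random.seed(0)

# Synthetic, illustrative data only: loss following L = a * C^(-b) plus noise.
a_true, b_true = 5.0, 0.07
compute = [10 ** k for k in range(3, 10)]  # arbitrary compute budgets
loss = [a_true * c ** (-b_true) * math.exp(random.gauss(0, 0.01)) for c in compute]

# In log-log space the power law is linear: log L = log a - b * log C,
# so an ordinary least-squares slope recovers -b.
xs = [math.log(c) for c in compute]
ys = [math.log(l) for l in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_fit = -slope
print(f"fitted exponent b = {b_fit:.3f} (true value {b_true})")
```

The smooth, predictable fit is the seductive part: as long as the power law holds, more compute reliably buys lower loss. The logistic argument is that at some point the relationship itself breaks down.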

Why logistic (S-curve) dynamics are likely for AI

  • Finite high-quality data. Text, code and domain-specific corpora are limited. Synthetic data helps, but quality and bias issues impose practical limits. For many frontier models, the exact data scale is not disclosed.
  • Compute, energy and supply chains. Advanced chips, power and datacentre build-outs are expensive and time-consuming. Energy availability and grid constraints matter – including here in the UK.
  • Diminishing returns at the frontier. The easy wins arrive early. Pushing further often requires harder algorithmic breakthroughs, better safety alignment (ensuring a model follows instructions reliably) and more careful evaluation.
  • Regulation and risk management. The UK’s AI Safety Institute is building evaluation regimes; the ICO expects UK GDPR-compliant data handling for AI. Governance can moderate deployment speed.
  • Adoption friction. Integrating AI into real workflows takes time: data access, change management, procurement, union and legal review, and measurable ROI. Those are natural braking forces.

What the current evidence actually shows

We have two truths at once:

  • Predictable gains from scaling. As the scaling laws literature suggests, more compute and data have delivered steady improvements on many benchmarks.
  • Emerging constraints. Reports of saturation on specific tasks, the need for tool-use and retrieval-augmented generation (RAG: plugging a model into a live knowledge base), and growing interest in smaller, specialised models all point to a more nuanced trajectory than “just get bigger”.

For the latest proprietary models, training data volumes, costs and full evaluation suites are often not disclosed. Treat public leaderboards with care: they’re useful signals, not guarantees of general capability.

Why this matters for UK organisations

  • Budgeting and vendor risk. Don’t bank on API prices falling or quality improving on a predictable curve. Price changes can go both ways; availability can be limited during surges.
  • Data protection and compliance. UK GDPR applies. Check vendors’ data processing terms, retention, and whether your prompts are used for training. The ICO’s guidance on AI and data protection is increasingly detailed.
  • Energy and sustainability. AI workloads have real energy footprints. If you have net-zero commitments, factor compute location and power sourcing into procurement.
  • Skills and integration. Returns will come from well-designed workflows, not just upgrading to the next model. Invest in prompt engineering, evaluation, and MLOps fundamentals.

Planning under uncertainty: practical steps

  • Assume S-curves with step-changes. Expect bursts of progress followed by plateaus. Build this into roadmaps and stakeholder expectations.
  • Run small, high-leverage pilots. Start where automation or decision support is easy to measure. For a lightweight example, see my guide to connecting ChatGPT to Google Sheets.
  • Track the right signals. Monitor model evals for your tasks, total cost of ownership, latency, data residency guarantees, and internal adoption metrics – not just hype.
  • Design for portability. Use abstraction layers so you can swap providers or models as capabilities and pricing shift. Avoid single-vendor lock-in where possible.
  • Governance by default. Align with UK GDPR, sector regulators, and your risk appetite. Document data flows, human-in-the-loop controls, and model evaluation processes.
  • Build a data advantage. High-quality, well-governed proprietary data will matter more as public data saturates. That’s your moat when generic models level out.
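
One way to realise the portability point above is a thin interface between application code and any vendor SDK. Here is a minimal Python sketch; `FakeProvider` and the method names are hypothetical stand-ins, not a real library’s API:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Provider-agnostic interface: all vendors are hidden behind this."""
    def complete(self, prompt: str) -> str: ...

class FakeProvider:
    """Hypothetical stand-in for a real vendor SDK (or a local model)."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarise(model: ChatModel, text: str) -> str:
    # Application code depends only on the ChatModel interface, so swapping
    # providers is a configuration change rather than a rewrite.
    return model.complete(f"Summarise: {text}")

print(summarise(FakeProvider(), "quarterly report"))
```

In production you would add timeouts, retries and cost logging at this layer too – which conveniently doubles as the place to collect the evaluation and TCO signals mentioned above.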

So, is AI exponential or logistic?

Both stories contain truth. Early phases feel exponential because they are. Over time, constraints push systems onto an S-curve – until a new technique, dataset or hardware shift kicks off the next wave.

If you’re making decisions today in the UK context, plan for fast but bumpy progress. Budget for experimentation, insist on compliance by design, and prioritise real workflows over headline benchmarks. That way, whether the curve bends sooner or later, you’re ready.

Last Updated

March 1, 2026

