Will We Achieve AGI This Century? A Sober Look at Timelines, Limits and Policy

Reddit asks: Is AGI impossible in the 21st century?

A recent thread on r/ArtificialInteligence captured a growing tension in AI discourse. The poster writes:

Am I the only one who believes that even AGI is impossible in the 21th century?

They add that most people treat AGI as inevitable and debate only the timing – with some already leaping ahead to ASI. It’s a fair challenge. If you’re feeling sceptical, you’re not alone, and you’re not being unreasonable.

Here’s a grounded overview of what the question means, the strongest arguments on both sides, and what it implies for readers in the UK – whether you’re building with AI today or setting policy and strategy.

Read the original Reddit post

What do we mean by AGI and ASI?

Definitions matter, because your view on timelines depends on what counts as “general” intelligence.

  • AGI (artificial general intelligence) – a system that can understand, learn and perform a wide range of tasks at or beyond a competent human level, across domains, without narrow pre-programming for each task.
  • ASI (artificial superintelligence) – a system that vastly exceeds human capabilities across most domains.

Today’s leading models are impressive, but still narrow in important ways. They’re largely based on the transformer architecture (a neural network design introduced in 2017) and excel at pattern recognition in data, but struggle with reliability, long-horizon planning and grounded understanding.
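
To make “pattern recognition” concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. It’s an illustrative toy in NumPy, not any production model’s code; the shapes and random embeddings are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: weight each value vector by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how strongly each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # blend value vectors by attention weight

# Three token embeddings of dimension 4 (random stand-ins for learned vectors)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (3, 4): one contextualised vector per token
```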

The case for AGI this century

Why many believe AGI is plausible before 2100:

  • Scaling has delivered steady gains – Larger models, better training data and improved optimisation have reliably produced more capable systems. Progress may slow, but it hasn’t stalled.
  • Tool use and autonomy are improving – Models are increasingly able to call tools (search, code execution, APIs), plan multiple steps, and coordinate through agents (as in the sketch after this list). That blurs the line between “model ability” and “system capability”.
  • Resources and incentives are aligned – There is significant global investment, intense talent competition and strong commercial pressure to push capabilities forward.
  • Algorithmic efficiency compounds – Advances in training methods, model architectures and inference techniques can deliver big jumps without simply throwing more hardware at the problem.
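
The tool-use point is easiest to see in code. Below is a deliberately tiny agent loop: a stand-in fake_model either requests a tool or returns an answer, and the loop dispatches requests until it gets a final reply. Every name here (the toolbox, the JSON protocol, fake_model) is an assumption for illustration, not any vendor’s actual API.

```python
import json

# Hypothetical toolbox – the tool names and the JSON protocol are assumptions.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only: never eval untrusted input
    "search": lambda query: f"(stub) top result for: {query}",
}

def fake_model(prompt):
    """Stand-in for an LLM call: first emits a tool request, then a final answer."""
    if "calculator result:" not in prompt:
        return json.dumps({"tool": "calculator", "input": "355/113"})
    return json.dumps({"answer": "355/113 is roughly 3.1416."})

def agent_loop(prompt, max_steps=5):
    """Feed tool results back into the prompt until the model gives an answer."""
    for _ in range(max_steps):
        reply = json.loads(fake_model(prompt))
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])
        prompt += f"\n{reply['tool']} result: {result}"
    return "step limit reached"

print(agent_loop("What is 355/113?"))
```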

From this perspective, continued incremental progress plus integration with tools could yield broad, near-human competence across many tasks – a pragmatic definition of AGI.

The case against: limits, reliability and missing understanding

Why scepticism is reasonable:

  • Reliability and truthfulness are unresolved – Current systems still hallucinate (generate plausible but false content), misinterpret instructions and fail unpredictably under distribution shift. Trustworthy autonomy remains unsolved.
  • Data, energy and compute bottlenecks – Training at the frontier requires enormous datasets and power. Even with efficiency gains, there are physical, financial and environmental constraints.
  • General reasoning remains brittle – Strong performance on benchmarks doesn’t always transfer to real-world problem solving, long-horizon planning or causal reasoning. Competence can be shallow and fragile.
  • Evaluation is murky – We lack consensus on which benchmarks, if any, would demonstrate “general” intelligence. Without clear metrics, declaring AGI can be more marketing than science.
  • Safety and governance could slow deployment – Sensible regulation, liability and safety requirements may limit how fully autonomous systems are allowed to operate in society, especially in high-stakes settings.

From this angle, we’re still missing key scientific understanding about intelligence and robust learning. Without that, progress could plateau long before AGI.

Why this matters for the UK

For UK readers, the AGI debate isn’t abstract. It shapes policy, investment and skills planning:

  • Regulation and safety – The UK’s AI Safety Institute is developing evaluations for frontier systems. Expect more focus on testing, robustness and transparency.
  • Data protection and compliance – UK GDPR and the ICO’s guidance still apply. Any use of powerful AI must respect data minimisation, lawful processing and auditability. See the ICO for practical guidance.
  • Energy and infrastructure – Training and deploying advanced models has real energy costs. This affects data centre policy, grid planning and regional development.
  • Skills and productivity – Whether or not AGI is imminent, today’s systems already shift how knowledge work, software and operations are done. Upskilling and responsible adoption are immediate priorities.
  • Public services and procurement – Government and NHS deployments must balance innovation with safety, privacy and explainability. Conservative roll-outs are appropriate.

Policy note: the UK’s “pro-innovation” approach to AI regulation is evolving. The government’s white paper outlines principles-based regulation, with more to come for frontier systems. See the official white paper.

So, are you being too conservative?

Not necessarily. It’s rational to treat AGI timelines as deeply uncertain. Credible experts disagree, and forecasts span from “soon” to “not this century” to “not at all”. What matters is how you hedge.

A practical stance for UK organisations:

  • Exploit today’s reliable value – Use current models for summarisation, coding assistance, classification, content drafting, analysis and RAG (retrieval-augmented generation – combining search with generation) where risk is manageable (see the RAG sketch after this list).
  • Build governance now – Document use cases, test for bias, track model versions, and set review gates for higher-risk applications. Align with ICO guidance.
  • Design for change – Avoid vendor lock-in, keep human-in-the-loop for critical decisions, and be ready to upgrade as models improve.
  • Measure outcomes – Evaluate accuracy, latency, cost and error impacts. Don’t assume capability; verify it on your data.
  • Track the frontier – Watch evaluations from the AI Safety Institute and similar bodies. Treat bold claims without transparent tests as “not disclosed”.
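
To show what RAG means in practice, here is a minimal retrieve-then-generate sketch. The corpus, the keyword-overlap scorer and the call_llm placeholder are all assumptions for illustration – production pipelines typically use embedding search over a vector store and a hosted model API.

```python
# Minimal RAG sketch: retrieve relevant documents, then ask the model to
# answer using only that retrieved context.
CORPUS = [
    "UK GDPR requires a lawful basis for processing personal data.",
    "The AI Safety Institute evaluates frontier AI systems.",
    "Transformer models excel at pattern recognition in text.",
]

def retrieve(question, k=2):
    """Rank documents by naive keyword overlap with the question (stand-in for embedding search)."""
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for a model API call; just echoes the prompt here."""
    return f"[model answer grounded in prompt below]\n{prompt}"

def rag_answer(question):
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(rag_answer("Who evaluates frontier AI systems in the UK?"))
```

The design point is that the model answers from retrieved context rather than from memory alone, which makes outputs easier to audit against source documents.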

If you’re hands-on and want practical wins today, here’s a guide to streamline workflows with current models: How to connect ChatGPT and Google Sheets.

What to watch next

  • Reliability breakthroughs – Methods that substantially reduce hallucinations and improve verifiable reasoning.
  • Evaluation standards – Transparent, rigorous tests for general-purpose competence and safety.
  • Tool-integrated systems – Agents that plan, call tools, use memory and operate safely under constraints.
  • Compute and energy economics – Costs and capacity of training and inference at UK and global scales.
  • Policy and liability frameworks – Clear rules for accountability in autonomous or semi-autonomous deployments.

Bottom line

Believing AGI might not arrive this century is a defensible position. Believing it’s inevitable on a short timeline is also a defensible – but risky – bet. The sensible UK approach is to capture the gains from today’s systems, invest in skills and governance, and plan for multiple futures without overcommitting to any single timeline.

Last updated: October 26, 2025
