Why do people want AGI? A plain-English take on the hype and the hit to jobs
A thoughtful Reddit post asks a blunt question: why would ordinary people want AGI if it mainly benefits model owners and employers looking to cut headcount?
“If AI becomes capable of genuinely replicating any ‘human’ job… basically every single job that doesn’t have a physical component.”
It’s a fair challenge. The tech is exciting. The political economy is sobering. Below is a grounded view of who stands to gain or lose, what’s realistic in the near term, and what matters for a UK audience.
What “AGI” actually means – and what’s realistic soon
AGI (artificial general intelligence) is shorthand for systems that can perform most cognitive tasks at or above human level across domains, not just narrow tasks. Today’s leading models are powerful but still narrow, even if they look general on the surface.
Key limits to keep in mind:
- Reliability and hallucinations – models can produce confident nonsense and need human oversight.
- Safety and alignment – aligning systems with human intent and norms remains an open research area.
- Context and tools – many workflows need data access, permissions, and integrations, not just text generation.
- Liability and compliance – regulated industries (health, finance, legal) need auditability, which current models often lack.
So, in the near term we’ll see rapid automation of tasks, not a clean sweep of entire professions. The pace will differ by sector and by regulation.
Who benefits if we push towards AGI?
- Consumers – cheaper or faster services when routine admin and support are automated.
- Workers who embrace augmentation – higher output per person, especially in document-heavy roles (research, analysis, customer support triage, coding).
- Public services – potential for NHS and local authorities to reduce back-office burdens and waiting times if deployed safely.
- Science and medicine – accelerated research, drug discovery, and simulation, provided data quality and reproducibility are addressed.
- Accessibility – better assistive tools for people with disabilities (speech, vision, translation, summarisation).
These benefits are not automatic. They depend on access (open vs closed), competition, and whether gains are shared with workers and the public.
Who loses or faces disruption?
- Routine cognitive tasks – drafting, summarising, basic analysis, bookkeeping, entry-level coding, and first-line support will be heavily automated.
- Creative production roles – image, audio, and video generation compress cost and timelines; copyright, credit, and training data rights are live disputes in the UK.
- Gig and annotation work – low-paid labelling and review roles are vulnerable as models improve.
- SMEs without adoption capacity – if only large firms can afford safe integration and compliance, they gain further advantage.
- Democratic oversight – concentration of compute, data, and talent in a handful of firms is a structural risk.
The Reddit concern is valid: without policy, owners of models and capital capture most gains. With policy, outcomes can be more broadly shared.
UK-specific implications: law, regulation, and jobs
The UK landscape is evolving quickly:
- Data protection – the ICO’s guidance on AI and data protection applies to generative AI training and deployment. Expect scrutiny on lawfulness, transparency, and rights to object. See the ICO’s AI guidance.
- Competition – the CMA is reviewing foundation models and market dynamics to prevent bottlenecks in compute, data, and distribution. Read the CMA’s initial report.
- Safety and evaluation – the UK’s AI Safety Institute is developing tests for frontier models, including misuse and systemic risks. More at the AI Safety Institute.
- Regulatory approach – the government is pursuing a “pro-innovation” framework via existing regulators, not a single AI Act-style law. See the AI regulation white paper.
Labour market implications will vary. Roles tied to trust, regulation, field work, relationships, and safety-critical decisions are more resilient. But entry routes into many professions may narrow if the “junior work” is automated.
Why an “average person” might still want advanced AI
There are reasons beyond shareholder gains:
- Personal leverage – faster admin, research, and content creation can reduce drudge work and unlock side projects.
- Lower costs – from legal templates to tutoring, cheaper services expand access if quality and safety are managed.
- Public benefit – if deployed well, shorter NHS queues and better local services are possible.
- New jobs – assurance, evaluation, compliance, policy, AI operations, and domain-specific tooling are growing fields.
But these positives rely on three things: robust worker protections, real competition and access, and practical safety standards.
Practical steps for workers and teams now
- Adopt with guardrails – learn to supervise AI, check sources, and keep a human-in-the-loop for anything high-stakes.
- Automate the boring bits – build small, auditable automations that save hours each week. For example, link models to spreadsheets for reporting and QA. Here’s a simple guide: Connect ChatGPT to Google Sheets.
- Move up the stack – focus on problem framing, data quality, verification, stakeholder comms, and decision ownership. These are harder to automate.
- Invest in scarce skills – safety, security, compliance, integration engineering, and domain expertise paired with AI tooling.
- Know your rights – UK GDPR applies to your data; push employers for transparency on data use, monitoring, and human review.
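To make the "guardrails" and "auditable automations" points concrete, here is a minimal sketch in Python of a human-in-the-loop workflow: a model drafts replies, a human decision gates every release, and each decision is written to an audit trail. The `draft_reply` function and the ticket examples are hypothetical placeholders, not any specific product's API.

```python
import csv
import datetime
import io

def draft_reply(ticket_text):
    # Placeholder for a model call (e.g. an LLM API).
    # In this sketch it returns a canned draft; it never sends anything itself.
    return f"Draft response for: {ticket_text}"

def review_and_log(tickets, approve, audit_log):
    """Require explicit human approval before any draft is released,
    and append every decision to an audit trail (CSV rows here)."""
    writer = csv.writer(audit_log)
    released = []
    for ticket in tickets:
        draft = draft_reply(ticket)
        ok = approve(ticket, draft)  # the human-in-the-loop decision point
        writer.writerow([
            datetime.datetime.utcnow().isoformat(),
            ticket,
            "approved" if ok else "rejected",
        ])
        if ok:
            released.append(draft)
    return released

# Usage: the lambda stands in for a real reviewer; here it blocks
# anything mentioning a refund so a person handles it instead.
log = io.StringIO()
sent = review_and_log(
    ["Refund request #123", "Password reset help"],
    approve=lambda ticket, draft: "Refund" not in ticket,
    audit_log=log,
)
```

The design choice that matters is that approval and logging sit outside the model call: swapping in a real API changes only `draft_reply`, while the audit trail and the human gate stay in place for anything high-stakes.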
Policy levers that could make AGI broadly beneficial
- Worker voice – collective bargaining on AI deployment, job redesign, and fair sharing of productivity gains.
- Skills and transition support – funded reskilling, portable training allowances, and robust careers services.
- Competition and openness – prevent choke points in compute and app distribution; encourage open standards and interoperability.
- Transparency and liability – provenance for training data, audit trails for high-risk uses, and clear accountability for harms.
- Public sector adoption – invest in trustworthy AI for NHS and local government to spread benefits beyond the private sector.
What comes next: scenarios and guardrails
- Acceleration – steady capability gains and widespread task automation. Requires strong evaluation, red-teaming, and monitoring to keep services safe.
- Muddle-through – uneven adoption due to legal, ethical, or cost barriers. Productivity gains arrive slower but with more safeguards.
- Shock events – misuse or failures trigger stricter controls. Prepared organisations with clear audit trails will fare better.
Whichever path we take, the social contract matters as much as the models. Without fair distribution, the Reddit scepticism will be right.
Read the original thread and useful resources
- Reddit discussion: Why do people want agi
- ICO – AI and data protection: Guidance hub
- CMA – Foundation models: Initial report
- UK AI Safety Institute: Research and evaluations
- UK government – AI regulation white paper: Policy approach
Bottom line: wanting better tools is not the same as wanting a world where only model owners win. The UK has a window to shape adoption so that advanced AI – whether we call it AGI or not – actually works for people. Let’s use it.