LLMs aren’t the final form of AI – and that changes the debate
A popular post on r/ArtificialIntelligence argues that we’re debating AI as if large language models (LLMs) are the endgame. The author points out that LLMs are today’s best-known generative AI tools, but they’re not synonymous with AI as a field, nor are they likely to be its final form.
“LLMs will become to AI what floppy disks became to data centers.”
It’s a useful reminder. Before transformers and LLMs, we had hidden Markov models (HMMs), gradient-boosted machines (GBMs), recurrent neural networks (RNNs), variational autoencoders (VAEs) and generative adversarial networks (GANs) – all breakthroughs in their time. The current conversation about jobs, accuracy, capability and risk often focuses on today’s LLM limitations. That’s fair, but it’s also short-sighted if we treat those limits as fixed.
“LLMs are not the final form that AI models will take.”
For a UK audience – from engineering teams to policy makers – the takeaway is simple: plan for a moving target. Don’t set strategy, procurement or regulation on the assumption that this generation of models is the ceiling.
What “beyond LLMs” could realistically mean
LLMs are built on the transformer architecture – a neural network design introduced in 2017 that excels at modelling sequences using “attention” to decide what matters. It’s been astonishingly effective for text, and increasingly for images, audio and code.
But “AI” isn’t one architecture. Historically, model families have risen, hit limits and been complemented or overtaken by new ideas. If you’re thinking ahead, expect shifts such as:
- Multimodal-native systems that reason across text, images, audio and video as first-class inputs, not bolt-ons.
- Agent-style systems that use tools and take actions, not just generate text. Today this is early-stage tool use and function calling; tomorrow it may look more like goal-driven orchestration.
- Retrieval-heavy approaches that ground output in verifiable sources (retrieval-augmented generation, or RAG), reducing hallucinations and improving auditability.
- Neuro-symbolic or hybrid methods that mix learned pattern recognition with explicit rules or structured reasoning.
- Smaller, specialised models deployed on-device or at the edge for privacy, latency and cost control.
None of this diminishes LLMs. It just puts them in context: one powerful technique among many, likely to be combined with others.
Quick jargon check
- Transformer – a neural network architecture that uses attention to model relationships in sequences (see the original paper, “Attention Is All You Need”).
- Context window – the maximum amount of text (measured in tokens) a model can consider at once.
- Fine-tuning – additional training on your data to steer a model’s behaviour.
- RAG (retrieval-augmented generation) – fetching relevant documents at query time and feeding them to the model to improve factual accuracy.
Source: Attention Is All You Need (Vaswani et al., 2017)
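To make the RAG idea above concrete, here is a minimal sketch in Python. The documents, the word-overlap scoring and the prompt template are illustrative assumptions only – a production system would use embeddings, a vector store and a real model API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Scoring and prompt template are illustrative, not a specific library's API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved sources."""
    sources = "\n".join(f"- {c}" for c in context)
    return (
        f"Answer using ONLY these sources:\n{sources}\n\n"
        f"Question: {query}"
    )

docs = [
    "The ICO publishes guidance on AI and data protection.",
    "Transformers use attention to model sequences.",
    "Floppy disks stored up to 1.44 MB of data.",
]
query = "What does the ICO publish?"
print(build_prompt(query, retrieve(query, docs)))
```

Because the retrieved sources travel with the prompt, the output can be audited against them – the traceability point made above.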
Why the “beyond LLMs” view matters in the UK
For UK organisations, the Reddit post’s central point has practical implications across strategy, compliance and talent.
- Procurement and lock-in – Don’t build everything around a single LLM vendor or model. Use abstraction layers and patterns (e.g., RAG) that let you swap components as the landscape evolves.
- Data protection – UK GDPR and the ICO’s AI guidance apply regardless of model type. If tomorrow’s models can ingest richer data or run locally, you’ll still need lawful bases, clear purposes and robust DPIAs.
- Accuracy and auditability – Today’s LLMs can hallucinate. Future approaches that combine retrieval, structured reasoning or verified tools may improve reliability and traceability. Design for audit now.
- Workforce impact – Claims like “AI won’t replace you” are often rooted in current LLM limits. As capabilities shift, job impact will vary by task, not job title. Focus on redesigning workflows, not blanket predictions.
- Skills planning – Prioritise durable skills: data quality, evaluation, prompt and tool design, model governance and security. These outlast any single model family.
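The procurement point above can be made concrete with a thin abstraction layer: application code depends on a minimal interface rather than one vendor’s SDK, so models can be swapped as the landscape shifts. The provider classes here are hypothetical stand-ins, not real SDKs.

```python
from typing import Protocol

class TextModel(Protocol):
    """Minimal interface the application codes against,
    independent of any vendor SDK."""
    def generate(self, prompt: str) -> str: ...

class StubProviderA:
    """Hypothetical stand-in for one vendor's model."""
    def generate(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class StubProviderB:
    """Hypothetical stand-in for a swapped-in replacement."""
    def generate(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

def summarise(model: TextModel, text: str) -> str:
    # Application logic depends only on the interface,
    # so switching providers is a one-line change at the call site.
    return model.generate(f"Summarise: {text}")

print(summarise(StubProviderA(), "UK GDPR applies regardless of model type."))
```

Swapping `StubProviderA()` for `StubProviderB()` changes nothing else in the application – which is the lock-in protection the bullet describes.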
Practical steps to avoid LLM tunnel vision
- Architect for change – Separate data, retrieval, reasoning and action layers. A RAG-first pattern gives you flexibility to swap models and improve grounding.
- Evaluate continuously – Track accuracy, latency, cost per task and safety metrics across models. Re-run evaluations as new models arrive.
- Keep data private by default – Minimise data sent to external APIs, and consider on-device or VPC-hosted options where feasible. Document your processing under UK GDPR.
- Use tool use judiciously – Give models controlled tools (search, calculators, internal APIs) to reduce hallucinations and improve utility. Monitor and log tool calls.
- Design for human-in-the-loop – Critical outputs (legal, medical, financial) should have human review. Build feedback loops to improve prompts, retrieval and fine-tunes.
- Invest in workflow integration – The productivity win often comes from where the model meets your data. If you’re automating reporting, here’s a practical guide: Connect ChatGPT with Google Sheets using a Custom GPT.
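The “evaluate continuously” step above can be sketched as a small harness that runs candidate models over a fixed task set and records accuracy, latency and cost per run. The stub model and its costs are placeholder assumptions for illustration.

```python
import time

def evaluate(model_fn, cases):
    """Run a model function over (input, expected) cases and
    report accuracy, mean latency and total cost."""
    correct, latencies, cost = 0, [], 0.0
    for prompt, expected in cases:
        start = time.perf_counter()
        answer, call_cost = model_fn(prompt)
        latencies.append(time.perf_counter() - start)
        cost += call_cost
        correct += int(answer == expected)
    return {
        "accuracy": correct / len(cases),
        "mean_latency_s": sum(latencies) / len(latencies),
        "total_cost": cost,
    }

# Placeholder "model": canned answers at a fixed per-call cost.
def stub_model(prompt):
    answers = {"2+2": "4", "capital of the UK": "London"}
    return answers.get(prompt, "unknown"), 0.001

cases = [("2+2", "4"), ("capital of the UK", "London"), ("colour of grass", "green")]
print(evaluate(stub_model, cases))
```

Re-running the same harness against each new model keeps comparisons honest as the landscape moves.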
On sentience, agency and existential claims
The Reddit author notes that arguments about AI’s limits often assume today’s LLMs – and that future systems might behave differently. That’s true, but speculation can run hot quickly. Present-day LLMs do not have goals, desires or awareness. They generate patterns based on data, and they can be steered by prompts and tools.
Could future systems take more autonomous actions? Yes, if we give them tools and authority. That’s why safety, oversight and scope control matter. Keep autonomy tightly scoped, log actions, and keep humans in the loop for consequential decisions.
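The scoping and logging described above can be sketched in a few lines: tools run only if allow-listed, and every call – permitted or denied – lands in an audit log. The tool names and allow-list here are illustrative assumptions.

```python
# Minimal sketch of tightly scoped tool use with an audit log.
# The allow-list and tool names are illustrative assumptions.

ALLOWED_TOOLS = {
    # eval with empty builtins restricts it to plain expressions.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}
audit_log = []

def call_tool(name: str, argument: str) -> str:
    """Only allow-listed tools run, and every call is logged."""
    if name not in ALLOWED_TOOLS:
        audit_log.append(("denied", name, argument))
        raise PermissionError(f"tool '{name}' is not allow-listed")
    result = ALLOWED_TOOLS[name](argument)
    audit_log.append(("ok", name, argument, result))
    return result

print(call_tool("calculator", "2 + 3"))  # → 5
```

Anything outside the allow-list fails loudly rather than silently, and the log gives auditors a complete record of what the system actually did.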
“We need to have conversations about the impact of AI in society without being limited to thinking about LLMs.”
That’s the right frame: plan for capability growth without assuming either stagnation or science fiction. UK organisations should align experimentation with proportionate governance. The ICO’s resources on AI and data protection are a good starting point: ICO – AI guidance.
Bottom line
The LLM era has unlocked enormous value, but it’s a phase, not the finish line. Treat LLMs as one component in a modular AI stack, prepare for hybrid approaches, and keep your governance and evaluation grounded in outcomes. If you design for change now, you won’t have to rebuild when the next wave arrives.
Original Reddit discussion
Read and join the conversation: We are debating the future of AI as If LLMs are the final form by /u/Je-ne-dirai-pas.