AI Won’t Make Coding Obsolete: Why Specification and Systems Thinking Still Matter

AI won’t make coding obsolete; specification and systems thinking are still essential for successful software development.

Written By

Joshua
Reading time
» 5 minute read 🤓

AI won’t make coding obsolete – the real work is specification and systems thinking

A thoughtful Reddit post argues that the toughest part of software isn’t typing code – it’s deciding exactly what the code should do. I agree. Large language models (LLMs) like GPT and Claude are superb at removing friction, but they don’t erase the need to think clearly about requirements, rules, and trade-offs.

“Typing code is just transcription. The hard work is upstream.”

If you work in a UK team – from fintech and ecommerce to the NHS and local government – the takeaway is simple: treat AI as an accelerator for well-specified work, not a substitute for product thinking or system design.

Accidental complexity vs essential complexity

It helps to separate two kinds of complexity:

  • Accidental complexity – the boilerplate, glue code, setup, and ceremony that come from tools and frameworks.
  • Essential complexity – the real-world rules, constraints, and edge cases that define your product or service.

“Tools like GPT, Claude, Cosine… remove accidental complexity… But it doesn’t touch essential complexity.”

LLMs excel at removing accidental complexity. They scaffold projects, wire APIs, generate tests, and translate patterns across languages. But essential complexity remains. If your system embodies hundreds of rules and exceptions, someone still has to specify them. Compressing those semantics too far simply moves the risk downstream as bugs and surprises.

Why this matters for UK engineering teams

UK organisations face real constraints: GDPR, sector regulators (FCA, Ofcom, MHRA), and public-sector service standards. Those constraints are essential complexity. You can’t vibe-code compliance.

For example, a claims workflow must reflect UK regulatory timelines and auditability. A patient triage tool must encode clinical safety rules and data minimisation. AI will speed up the plumbing, but teams still need explicit models for the domain and crisp acceptance criteria. The ICO’s guidance on generative AI is clear: you remain accountable for data protection and outcomes, regardless of tools.

Practical workflow: specification first, AI second

To get the most from AI without sacrificing clarity, flip your process:

  • State the problem in domain language – who is the user, what decision is being made, under what constraints.
  • Capture rules as decision tables or state machines – enumerate states, transitions, and forbidden moves.
  • Write acceptance criteria and edge cases – include “nasty” inputs, nulls, time zones, and failure paths.
  • Design data contracts – what fields exist, which are required, how they’re validated, and retention rules.
  • Use property-based or generative tests – specify invariants that must always hold.
  • Then ask the LLM to implement, refactor, and generate scaffolding – keep it on a tight leash via tests.
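The workflow above can be sketched in a few lines. This is a minimal, hypothetical example (the claim states, events, and invariant are illustrative, not from the post): the allowed transitions are written down as data, forbidden moves are rejected, and a generative check hammers the spec with random event sequences to confirm an invariant always holds.

```python
import random

# Hypothetical claims-workflow states and allowed transitions.
# The domain rules live here as data, not buried in implementation.
TRANSITIONS = {
    ("submitted", "review"): "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"): "rejected",
    ("approved", "pay"): "paid",
}

def apply_event(state: str, event: str) -> str:
    """Advance the workflow, refusing any move the spec forbids."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"forbidden transition: {state} --{event}-->")

def check_invariant(events: list) -> bool:
    """Property: a claim can only be paid after it has been approved."""
    state, seen_approved = "submitted", False
    for event in events:
        try:
            state = apply_event(state, event)
        except ValueError:
            return True  # forbidden move rejected, so the invariant holds
        seen_approved = seen_approved or state == "approved"
        if state == "paid" and not seen_approved:
            return False
    return True

# Generative check: random event sequences must never break the invariant.
random.seed(0)
pool = ["review", "approve", "reject", "pay"]
assert all(check_invariant(random.choices(pool, k=6)) for _ in range(1000))
```

The point is not the ten lines of Python; it is that an LLM handed this table plus the invariant has far less room to guess than one handed a paragraph of prose.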

“Strip away the tooling differences and coding, no-code, and vibe coding… collapse into the same job: clearly communicating required behaviour to an execution engine.”

Where LLMs help – and where they don’t

Great uses of GPT/Claude in daily engineering

  • Project scaffolding, build pipelines, and CI/CD templates.
  • API client generation and integration stubs.
  • Schema migrations, mechanical refactors, and performance hints.
  • Test generation from specs and data contracts.
  • Documentation, changelogs, and code comments aligned to your standards.

Hard problems that still need humans

  • Deciding product scope, trade-offs, and non-functional requirements (e.g., latency, reliability, accessibility).
  • Encoding regulatory, legal, and ethical constraints into system behaviour.
  • Designing resilient architectures that handle real usage, failure, and recovery.
  • Owning ambiguity: when the spec is incomplete, choosing the right default and documenting rationale.

Costs, privacy, and compliance for UK teams

Model performance and pricing change quickly and can be opaque across vendors. Specific token costs are not disclosed in the Reddit post. If you’re budgeting, check current pricing on the OpenAI and Anthropic docs. Be disciplined with usage caps per environment and monitor output quality to avoid hidden rework costs.

On data protection, avoid sending personal or sensitive data in prompts unless your vendor provides a compliant processing agreement, data residency controls, and no-training guarantees. Keep secrets out of prompts entirely. For public-sector or regulated workloads, document your risk assessment and disclose the minimum data necessary.
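One cheap guard rail is to scrub prompts before they leave your infrastructure. The sketch below is illustrative only, assuming a couple of regex patterns for emails and UK National Insurance numbers; a production system would use a vetted PII-detection step, not two regexes.

```python
import re

# Illustrative patterns only; production systems need a vetted
# PII-detection step, not a couple of regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance
}

def redact(prompt: str) -> str:
    """Swap likely personal data for placeholders before the prompt
    is sent to a third-party model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

print(redact("Claimant jane.doe@example.com, NI QQ123456C, claim #42"))
# -> Claimant [EMAIL], NI [NI_NUMBER], claim #42
```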

Skills to invest in: the human edge

If coding is becoming cheaper, the premium shifts to analysis and design. Teams should level up in:

  • Requirements engineering – elicitation, conflict resolution, and traceability from policy to code.
  • Domain modelling – ubiquitous language, bounded contexts, and event flows that mirror reality.
  • Decision modelling – decision tables, state charts, and scenario outlines that LLMs can implement and test.
  • Risk and resilience – threat modelling, observability, chaos testing, and graceful degradation.

If you want a practical example of LLMs removing accidental complexity, I’ve covered how to wire an LLM to a spreadsheet without faff: Connect ChatGPT and Google Sheets.

What the Reddit post gets right

The core claim stands up: LLMs are excellent at turning clear intent into working code, and poor at conjuring accurate intent from ambiguous prompts. The risk of “compressed semantics” is real – skipping detail simply defers it until production, where it’s more expensive and visible to users.

For UK readers, the practical implication is to invest more in specification quality and validation, not less. Use AI to accelerate the easy parts, keep humans in charge of the hard parts, and make your requirements executable through tests and models rather than prose.

Final take

AI won’t make coding obsolete because coding was never the hard part. The durable skill is communicating behaviour precisely – to colleagues, to regulators, and, yes, to execution engines. If you master specification and systems thinking, AI will make you faster. If you skip them, AI will make your mistakes faster too.

Last Updated

January 4, 2026
