Are we cooked? A developer’s 2026 anxiety, unpacked
A recent Reddit post from /u/kalmankantaja captured a feeling many developers in 2026 will recognise: the shock of watching paid AI coding tools remove much of the need to type code by hand, and the creeping worry that this isn’t just task automation – it’s the automation of thinking.
As the poster put it: “AI is replacing intellectual activity itself.”
The author is considering leaving software for biotech research, but worries AI could swallow that too.
In their words: “Even though AI can’t generate truly novel ideas yet, the pace scares me.”
Let’s ground this. What’s really changing, what still needs us, and how UK developers can build a resilient, high-leverage career over the next few years.
Is AI replacing programmers or programming?
The texture of the change: tasks vs roles
Modern AI coding assistants are based on transformers – models that predict the next token (piece of text) with remarkable accuracy over a large context window (the amount of text they can “remember” at once). They are extremely good at pattern completion, translation between representations (requirements to code, code to tests), and explaining ambiguous code.
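To make “predict the next token” concrete, here is a deliberately tiny sketch: count which token follows which in a scrap of code, then complete by picking the most frequent successor. Real assistants use transformer networks trained on vast corpora, not lookup tables; this toy only illustrates the completion-by-pattern idea.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: tally successors in a tiny "corpus".
# Purely illustrative -- real models learn statistical patterns at
# enormous scale, but the core task is the same: predict what comes next.
corpus = "def add ( a , b ) : return a + b".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token):
    """Return the most frequently observed token after `token`."""
    counts = successors.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("return"))  # -> a
```

Scale that intuition up by many orders of magnitude and you get why these tools are so good at finishing the code you started.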
That makes them lethal to a large slice of day-to-day software work: boilerplate, wiring, refactors, tests, documentation drafts, and porting between frameworks. It can feel like “intelligence automation” because a lot of programming is text and patterns – and these models excel at text and patterns.
But there’s a difference between automating programming and automating product and system outcomes. Models still struggle with underspecified requirements, hidden constraints, real-world data mess, trade-offs (security vs velocity, cost vs latency), and accountability. These sit outside token prediction – they live in teams, customers, governance and production environments.
What this means for UK developers in 2026
Where AI excels today
- Generating scaffolding: CRUD endpoints, schema migrations, config files, CI stubs.
- Refactoring and tests: quick “make this pure”, “extract function”, and “generate unit tests”.
- Framework fluency: “Show me the idiomatic way” in a stack you don’t know well.
- Explaining unfamiliar code: summarising legacy modules and suggesting pitfalls.
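The “make this pure” refactor in the list above is worth seeing concretely. This is a hypothetical example of the mechanical transformation assistants handle well: splitting a function that mixes computation with I/O into a pure, testable core plus a thin wrapper.

```python
# Before: computation tangled with printing and a magic number.
def report_total_before(prices):
    total = 0
    for p in prices:
        total += p * 1.2  # 20% VAT baked in, undocumented
    print(f"Total: {total:.2f}")

# After: a pure function (easy to unit-test and reuse) plus I/O shell.
VAT_RATE = 0.2

def total_with_vat(prices, vat_rate=VAT_RATE):
    """Pure core: no I/O, no hidden state, explicit VAT rate."""
    return sum(p * (1 + vat_rate) for p in prices)

def report_total(prices):
    print(f"Total: {total_with_vat(prices):.2f}")

print(total_with_vat([10.0, 20.0]))  # -> 36.0
```

An assistant can produce the “after” version in seconds; your job is knowing that the split is worth making and that the VAT rate deserved a name.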
Where human judgment still dominates
- Shaping problems: turning ambiguous asks into crisp, testable requirements.
- System design and trade-offs: cost, performance, security, privacy and operability.
- Compliance-aware engineering: UK GDPR, sector rules (finance, health), data residency.
- Production reality: incident response, SLOs, back-pressure, graceful degradation.
- Cross-team alignment: timelines, dependencies, and saying “no” well.
AI is your new baseline productivity layer. The value moves up the stack: design, integration, data boundaries, safety, and measurable outcomes. Roles are shifting from “type code” to “ship reliable systems that matter”.
UK risks and obligations: privacy, IP and vendor choice
If you’re pasting client code or data into US-hosted tools, you’re potentially in scope of UK GDPR. Build guardrails before habits set in:
- Classify data. Don’t feed personal data, secrets or regulated info into public tools without a lawful basis and a data processing agreement (DPA).
- Prefer enterprise plans that guarantee no training on your inputs and offer audit logs and region controls where available. If a vendor doesn’t disclose this, ask directly.
- Track model output licensing for code. Some vendors’ terms differ; don’t assume uniform rights.
- Adopt internal red-teaming and review for hallucinations, bias, and insecure patterns.
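A cheap first guardrail is a pre-flight scan that flags obviously sensitive strings before a prompt leaves your machine. The sketch below is illustrative only – the patterns are simplistic examples, not a data-loss-prevention product, and real classification needs policy, tooling and human review.

```python
import re

# Illustrative pre-flight check: flag likely personal data or secrets in
# text bound for an external AI tool. Patterns are deliberately naive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
    "api_key_hint": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def flag_sensitive(text):
    """Return the names of patterns found, so a human can decide."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

prompt = "Debug this: API_KEY=sk-123 fails for alice@example.com"
print(flag_sensitive(prompt))  # -> ['email', 'api_key_hint']
```

Even a crude check like this turns “don’t paste secrets” from a rule people forget into a speed bump they hit every time.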
Useful reference: the UK Information Commissioner’s Office guidance on AI and data protection.
Biotech as an escape? Or a different kind of bet?
AI is reshaping biotech and research too. Many research tasks are becoming more automatable – literature review, data wrangling, experiment planning support, and analysis. But wet-lab work, regulatory processes, and hypothesis-driven thinking keep humans in the loop for the foreseeable future.
If you’re drawn to science, go because you love the domain, not to outrun AI. The most resilient path is likely cross-disciplinary: software skills plus a hard domain (biology, energy, manufacturing, law). You’ll bring leverage wherever the constraints are gnarly and the data is proprietary.
A resilient career plan for 2026–2030
- Own problem shaping: be the person who turns fuzzy asks into tests, data contracts and diagrams. AI then accelerates the build.
- Specialise where constraints bite: security engineering, data engineering, distributed systems, privacy-by-design, reliability, cost optimisation.
- Get AI-fluent, not AI-dependent: prompts, evaluation, retrieval (RAG – retrieval-augmented generation, which lets models ground outputs on your documents), and tool use. Know when to turn it off.
- Ship evidence: short cycles, measurable outcomes, and post-mortems. Your portfolio should show decisions, not just code.
- Control costs: measure token spend, cache responses, prefer smaller models when good enough, and design offline fallbacks.
- Document the “why”: architectural decisions and prompt rationales belong in the repo next to code and tests.
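The RAG idea mentioned in the list above fits in a few lines. This is a toy sketch: real systems use vector embeddings and a vector store for retrieval, and the document names here are invented. Keyword overlap stands in for similarity search purely to show the shape of the pattern – retrieve, then ground the prompt.

```python
# Minimal RAG-shaped sketch: retrieve the most relevant internal doc,
# then build a prompt grounded on it. Hypothetical docs; naive retrieval.
docs = {
    "deploy.md": "Deploys run via CI on merge to main. Rollback with the revert job.",
    "oncall.md": "Page the on-call engineer if error rate exceeds 1% for 5 minutes.",
}

def retrieve(query, k=1):
    """Rank docs by naive word overlap with the query (embedding stand-in)."""
    q = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query):
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I rollback a deploy?"))
```

The grounding step is also where cost control lives: retrieving one relevant paragraph beats stuffing your whole wiki into the context window.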
Working with AI without hollowing out your skills
- Start with intent: write a 5–10 line spec or test by hand before you prompt.
- Time-box assistance: use AI for scaffolding and alternatives; do final passes yourself.
- Constrain outputs: specify style, dependencies, security requirements and performance budgets.
- Verify automatically: run tests, linters, scanners. Never trust a green-looking snippet.
- Keep a decision log: one paragraph per change explaining the trade-offs.
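“Start with intent” looks like this in practice: write the test yourself, then ask the assistant to implement the function that makes it pass. The `slugify` function here is a hypothetical example – the point is that the spec is yours, whatever code comes back.

```python
import re

# Hand-written first, before prompting: this IS the spec.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  UK GDPR  2026 ") == "uk-gdpr-2026"
    assert slugify("") == ""

# A plausible assistant-suggested implementation you still review and run.
def slugify(text):
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()
print("spec satisfied")
```

If the generated code fails your test, you learn something about the problem; if it passes, you merge with evidence rather than vibes.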
For a simple workflow example, see how to connect a model to a spreadsheet in my guide on automating Google Sheets with ChatGPT. The same pattern applies: small, auditable automations with clear inputs and outputs.
Reality check: limits to keep in view
- Hallucinations: confident wrong answers still happen, especially off-distribution or with vague prompts.
- Context brittleness: models can miss long-range constraints even with big context windows.
- Security pitfalls: unsafe defaults (e.g., string concatenation in SQL, weak crypto) creep in unless you ask explicitly.
- Evaluation is hard: you need tests and benchmarks tailored to your domain, not generic leaderboards.
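The SQL pitfall in the list above is worth seeing end to end, because it is exactly the kind of code an assistant will happily generate if you don’t constrain it. A minimal demonstration with Python’s built-in sqlite3:

```python
import sqlite3

# Demonstrates why string-concatenated SQL is unsafe and what to ask for.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# Unsafe: the input rewrites the query and matches every row.
unsafe = f"SELECT count(*) FROM users WHERE name = '{user_input}'"
print(con.execute(unsafe).fetchone()[0])  # -> 1 (injection succeeded)

# Safe: a parameterised query treats the input as data, not SQL.
safe = "SELECT count(*) FROM users WHERE name = ?"
print(con.execute(safe, (user_input,)).fetchone()[0])  # -> 0
```

Putting “use parameterised queries, never string concatenation” in your prompt is cheap; finding the injection in review is not.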
Bottom line: are we cooked?
No – but the kitchen is hot. AI has automated a surprising chunk of programming-as-typing. What it hasn’t automated is the chain from problem to impact: scoping, constraints, ethics, data stewardship, quality and operations. That’s where your edge lives.
If you switch to biotech, do it because you care about the science and are ready to pair domain learning with your software leverage. If you stay in software, move up the stack, get compliance-savvy, and treat AI as a power tool – not a crutch. The work is changing fast, but there’s still plenty of human-shaped ground to own.