From Six Months to Three Days: Agentic AI and the Future of Software Development

Agentic AI reduces software development time from six months to three days, revolutionising the future of the industry.

Written By

Joshua
Reading time
» 6 minute read 🤓

Vibe-coding and agentic AI: 6 months of work in 3 days

A senior developer on Reddit claims they integrated libtorch (the C++ backend of PyTorch) into a bespoke Lisp in three days using an “agentic” AI workflow – after failing to ship a useful wrapper in six months back in 2020. The result reportedly included a working wrapper, documentation, a tutorial, and hundreds of runnable examples to validate each step, compiling on macOS and Linux with MPS and GPU support.

“I implemented in 3 days, what I couldn’t implement in 6 months.”

The developer doesn’t name the model and isn’t selling a tool. They are, however, making a point about capability: modern AI, operating in multi-step, tool-using “agent” modes, can now traverse spotty docs, infer interfaces, and iterate with tests at a pace that is startling even to seasoned engineers.

For UK readers, this story is a clear signal: agentic AI is not just autocomplete. It is a different way of building software.

What is “agentic AI” in software development?

Agentic AI refers to systems that plan, execute, and iterate across multi-step tasks with minimal supervision. Unlike a single prompt/response, an agent can read docs, propose an approach, write code, run it, fix errors, and generate tests and examples. It can call tools (e.g., a compiler, shell, or test runner), keep state, and adjust its plan.

This is distinct from basic code generation. It is closer to a junior developer who can scaffold, try, fail fast, and try again – only far faster, and without tiring.
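
The write-run-fix cycle described above can be sketched in a few lines. This is a minimal, illustrative loop only: the `stub_model` function stands in for a real LLM call (its two-attempt behaviour is invented for the demo), and a production agent would add planning, state, and richer tool access.

```python
# Minimal sketch of an agentic loop. `stub_model` is a hypothetical stand-in
# for an LLM: here it deliberately produces a buggy first draft, then a fix
# once it "sees" the error, so the loop is runnable without any API.
import subprocess
import sys
import tempfile

def stub_model(task, last_error):
    if last_error is None:
        return "print(1 / 0)"                   # buggy first attempt
    return "print('hello from the agent')"      # "fixed" after the traceback

def agent_loop(task, model, max_iterations=5):
    """Ask the model for code, run it, and feed errors back until it passes."""
    last_error = None
    for _ in range(max_iterations):
        code = model(task, last_error)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()        # success: stop iterating
        last_error = result.stderr              # failure: retry with context
    raise RuntimeError("agent gave up")

print(agent_loop("say hello", stub_model))
```

The key design point is that the error output goes back into the next prompt – that feedback channel, not the model alone, is what makes the loop converge.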

Reading between the lines: how did the AI pull this off?

The Reddit post suggests an agentic loop did the heavy lifting. The AI likely:

  • Explored libtorch’s C++ API and headers, mapping them to the Lisp’s FFI or wrapper pattern.
  • Generated and ran small examples at each step to verify bindings and tensor operations.
  • Wrote companion documentation as it went, crystallising usage into a tutorial.
  • Targeted cross-platform builds (macOS/Linux), including Apple MPS and GPU support.
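
The "verify bindings" step above can be illustrated with Python's `ctypes` standing in for a Lisp FFI. This is an assumption-laden sketch: `libm`'s `sqrt` is used as a stand-in for a libtorch entry point, since the post does not show the actual bindings.

```python
# Sketch of binding verification: declare the foreign signature, then
# smoke-test it with a known input/output pair before moving to the next
# symbol. libm's sqrt stands in for a real libtorch function.
import ctypes
import ctypes.util
import math

# Load the C maths library; fall back to the current process's symbols.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def verify_binding(fn, sample_input, expected):
    """Smoke-test one binding with a known input/output pair."""
    return math.isclose(fn(sample_input), expected)

print(verify_binding(libm.sqrt, 9.0, 3.0))
```

An agent that runs a check like this after every generated binding catches wrong argument types or missing symbols immediately, rather than hundreds of bindings later.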

None of this changes the fact that libtorch’s docs can be patchy. It does show that an agent can synthesise across scattered sources, enforce a consistent API surface, and brute-force its way through integration problems with a battery of tests.

External reference: see the official libtorch (C++ API) docs. They are serviceable but not beginner-friendly – exactly the kind of terrain where an agent can help.

Why this matters for UK developers and teams

The productivity delta – 6 months vs 3 days – is not a marginal gain. While this is a single anecdote, it matches what many teams are seeing with structured AI workflows: scaffolding, porting, integration, and test-writing can be compressed dramatically.

Potential benefits:

  • Speed: integrations, migrations, and wrappers that used to stall on docs can move quickly.
  • Coverage: automatic examples and tests improve confidence, especially across platforms.
  • Documentation: agents write “what they wish they had” while working – a rare win.

Trade-offs and risks:

  • Correctness debt: AI can confidently produce subtly wrong code, especially around memory, concurrency, or numerical edge cases. Tests must be robust and meaningful.
  • Maintainability: autogenerated code can be verbose or inconsistent. Establish style, linting, and constraints up front.
  • Compliance and data protection: uploading proprietary code to third-party services engages UK GDPR obligations. Use enterprise controls, DPAs, and minimise data shared.
  • Licensing and IP: check dependency licences and generated code provenance, particularly if your agent retrieves snippets from the web.

Is “vibe-coding” sustainable?

Vibe-coding is the informal practice of letting an AI infer the right shape of a system from high-level intent rather than rigid specs. It can work brilliantly for glue code and wrappers. It is risky for critical components without tight constraints.

Good teams pair vibe-coding with discipline:

  • Lock down interfaces first: types, invariants, and error handling contracts.
  • Adopt test-first prompts: ask the agent to write tests before or alongside implementations.
  • Run the loop locally where you can: compile, run, and benchmark in a CI pipeline.
  • Code review as usual: treat AI as a tireless pair, not an oracle.
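
The test-first point deserves a concrete shape. In this sketch the `clamp` function is a hypothetical example target (not from the article): the human writes the contract as executable tests, and the agent's only job is to make them pass.

```python
# Test-first prompting in miniature: the tests pin down the contract
# before any implementation exists. `clamp` is a hypothetical target.
def test_clamp():
    assert clamp(5, 0, 10) == 5      # inside range: unchanged
    assert clamp(-3, 0, 10) == 0     # below: pinned to lower bound
    assert clamp(42, 0, 10) == 10    # above: pinned to upper bound

# The agent-supplied implementation only has to satisfy the tests:
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

test_clamp()
print("all contract tests passed")
```

Writing the assertions first turns "vibes" into a specification the agent cannot quietly drift away from.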

A practical playbook to try this safely

  1. Define scope and constraints: which parts will be autogenerated, which must be hand-written, and what “done” means (APIs, latency, platform support).
  2. Seed the agent: provide minimal, canonical examples of your language’s FFI or wrapper patterns. One good pattern beats ten mixed ones.
  3. Make testing first-class: create a test harness early. Instruct the agent to add runnable examples for each function it binds.
  4. Enforce style and safety: share your linters, formatters, and sanitiser flags. Ask the agent to comply and fix violations automatically.
  5. Compile and run often: keep the agent’s tool access to compiler, test runner, and platform checks (e.g., CUDA, Metal/MPS).
  6. Document as code: ask for a tutorial and API docs that mirror the tested examples, including platform-specific notes for macOS/Linux and GPU/MPS.
  7. Review licensing: record the libtorch version and build flags, plus licences of any transitive dependencies.
  8. Security and privacy: if using a cloud model, strip secrets, perform a DPIA where needed, and prefer models with enterprise privacy guarantees or on-prem options.
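
Step 7's provenance record can be as simple as a small manifest written next to the generated bindings. The field values below are illustrative assumptions, not details from the Reddit post.

```python
# Sketch of a provenance manifest for generated bindings. Version and
# flags here are made-up examples; record whatever your build actually used.
import json

manifest = {
    "library": "libtorch",
    "version": "2.1.0",                           # assumed example version
    "build_flags": ["-D_GLIBCXX_USE_CXX11_ABI=1"],
    "platforms": ["macos-arm64", "linux-x86_64"],
}

with open("bindings_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

with open("bindings_manifest.json") as f:
    print(json.load(f)["library"])
```

A file like this costs nothing to produce during the run and makes the build reproducible when the library, the agent, or both have moved on.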

If you are exploring lighter-weight automation, I’ve shared a practical guide to connecting AI to everyday tools here: How to connect ChatGPT and Google Sheets with a Custom GPT. The principle is similar: define the workflow, wire in the tools, and let the agent do the legwork.

Costs, models, and availability

Model used: not disclosed.

Costs: not disclosed. In practice, agentic loops can be more expensive than single-shot prompting due to many steps and tool calls. Track tokens/time and set budgets.
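
One way to make "set budgets" concrete is a guard that stops the loop before costs run away. The token figures below are invented for illustration; a real system would charge the budget with the usage numbers the API reports per call.

```python
# Sketch of a per-run token budget for an agentic loop. The step costs
# are made-up; in practice they come from the model API's usage metadata.
class BudgetExceeded(Exception):
    pass

class RunBudget:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens):
        """Record usage; abort the run the moment the cap is breached."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise BudgetExceeded(f"{self.used} > {self.max_tokens} tokens")

budget = RunBudget(max_tokens=10_000)
for step_cost in [3_000, 4_000, 2_500]:   # per-step usage from the API
    budget.charge(step_cost)
print(f"used {budget.used} tokens")
```

Because agentic loops multiply tool calls, a hard cap like this is the difference between a surprising bill and a loop that fails loudly and early.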

Availability: mainstream coding models are accessible in the UK via major vendors and open-source options. Enterprise buyers should confirm data residency, retention settings, and vendor DPAs.

Career impact: should UK developers be worried?

The Reddit author, nearing retirement, worries for the next generation. It’s understandable. Rapid automation changes how value is created. But it doesn’t erase the need for engineers – it shifts it.

Human leverage points are moving towards problem framing, architecture, safety, performance, and integration with real-world constraints. Someone still has to decide what to build, ensure it’s correct, maintainable, secure, and legal – and to steer the agent effectively.

“Agent” is a multiplier for skill, not a replacement for judgement.

Final take

Going from six months to three days is dramatic, but believable when you let an AI plan, code, run, and test in a tight loop. For UK teams, the opportunity is clear: adopt agentic workflows where the risk is manageable, wrap them in strong engineering practice, and treat data protection and licensing as non-negotiable.

If you try this with libraries like libtorch, start small, instrument everything, and insist on tests that actually break when things are wrong. Vibe-coding can be thrilling – but the best vibes are reproducible.

Last Updated

September 14, 2025
