A novel written in 2 days: what a pro author’s post tells us about AI and fiction
A verified-sounding novelist on Reddit claims they co-wrote a novel with Claude in under 48 hours, starting from an idea, a synopsis and two short opening chapters. They say the draft is “highly satisfactory” and a week of brisk editing could make it submission-ready – a process that normally takes them six to nine months.
> Working “together” we then wrote a highly satisfactory first draft in a day and a half.
Whether you believe the author or not, the claim is plausible with modern large language models (LLMs). And if true, it’s a clear signal: AI-assisted fiction is now a production reality, not a novelty. For UK writers, editors, and publishers, it raises immediate questions about speed, quality, contracts, copyright, and disclosure.
What the novelist did – and didn’t disclose
Stated approach
- Provided Claude with: a high-level idea, two short opening chapters, and a synopsis.
- Iterated “together” with the model to produce a complete first draft in ~36 hours.
- Plans 1-2 quick edit passes to reach a quality suitable for publisher submission.
Not disclosed
- Which Claude model (e.g., Claude 3.5) or context window size was used. A context window is how much text a model can consider at once.
- Prompting method, chapter-by-chapter workflow, or tooling (editors, version control, notes).
- Genre, word count, and final editorial steps (sensitivity reads, legal checks).
Why this matters to UK authors and publishers
Productivity and cost implications
Producing a competent draft in days reframes what’s possible in a typical UK publishing cycle. For authors, it could mean more experimentation and faster development of proposals. For agents and editors, it may change how submissions are judged and triaged.
There’s also the economics. Faster drafting doesn’t guarantee lower total cost – structural editing, voice polish, and fact-checking all still matter – but the balance shifts. Even a modest acceleration compounds across a list.
Quality and originality
LLMs excel at structure, pacing, and “plausible” prose. Voice, subtext, and deep originality still need human intent and refinement. Expect more publishable drafts that still require a human pass to cut clichés, fix character arcs, and land the tone. Repetition and “flat” voice are common failure modes.
Disclosure, contracts, and expectations
UK publishing contracts increasingly contain language around AI use. Some houses want disclosure if AI tools materially shaped the work. If you’re an author, read your contract and ask your agent. If you’re a publisher, be clear on your policies – including crediting, warranties, and indemnities.
Copyright and data protection: UK-specific points
Who owns an AI-assisted novel?
UK law recognises “computer-generated works” where no human author is identified; the author is the person who made the arrangements for the work to be created (Copyright, Designs and Patents Act 1988, s.9(3)), and protection lasts 50 years from the end of the calendar year in which the work was made (s.12(7)). In practice, many AI-assisted works will still have substantial human authorship, in which case the ordinary literary work term (author’s life + 70 years) may apply instead.
The line between “assisted” and “computer-generated” isn’t bright. Keep robust records of your contribution (outlines, edits, version history) to evidence human authorship.
Sources: CDPA 1988 s.9(3) and s.12.
Data handling with AI tools
If you paste unpublished material into a hosted model, consider confidentiality, rights, and data retention. Check vendor policies and your settings. Anthropic’s privacy policy sets out how data may be stored and processed; business plans typically offer stronger controls than consumer interfaces. If confidentiality is critical, use enterprise-grade deployments or local workflows.
Limits and risks that still apply
- Factual drift and hallucinations – models can insert confident but false details. For non-fiction elements in a novel (dates, places), verify.
- Style flattening – strong, original voice often needs human rewriting. Readers notice when prose lacks idiosyncrasy.
- Genre patterns – LLMs may overfit to common tropes. Without careful steering, plots can feel derivative.
- Bias and representation – models reflect training data. Sensitivity reads and editorial checks still matter.
- Disclosure and trust – misrepresenting authorship can damage relationships with agents, editors, and readers.
A practical AI-assisted fiction workflow (without the hype)
- Outline first. Provide a chapter-by-chapter synopsis and character sheets. Use the model to stress-test stakes and structure, not to guess them.
- Set voice constraints. Paste 1-2 pages of your voice and ask the model to emulate tone, rhythm, and diction while avoiding specific authors.
- Write in scenes. Generate scenes with beats and POV guidance. Keep context windows small and controlled to avoid drift.
- Revise iteratively. Ask for alternatives on weak sections (e.g., “3 sharper lines of dialogue that move the subplot”). Discard a lot.
- Human edit pass. Cut clichés, fix cadence, and rework character motivations. Read aloud. Your voice is the differentiation layer.
- Track provenance. Keep drafts, prompts, and edits. This helps with copyright assertions and publisher disclosures.
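The scene-by-scene drafting step above can be sketched as a small prompt builder. To be clear, this is my own illustrative sketch, not the Reddit author’s workflow: the `Scene` fields and the prompt wording are assumptions about what “beats and POV guidance” might look like in practice.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    pov: str          # point-of-view character
    goal: str         # what the scene must accomplish
    beats: list[str]  # ordered story beats to hit

def build_scene_prompt(voice_sample: str, scene: Scene) -> str:
    """Assemble a constrained drafting prompt for a single scene.

    Requesting one scene at a time (rather than a whole chapter)
    keeps the working context small and limits voice drift.
    """
    beats = "\n".join(f"- {b}" for b in scene.beats)
    return (
        "Match the tone, rhythm, and diction of this sample, "
        "without imitating any named author:\n"
        f"{voice_sample}\n\n"
        f"Write one scene from {scene.pov}'s point of view.\n"
        f"Scene goal: {scene.goal}\n"
        f"Beats to hit, in order:\n{beats}\n"
        "Stop at the end of the scene."
    )

# Hypothetical example scene; substitute your own material.
scene = Scene(
    pov="Mara",
    goal="Mara discovers the forged letter",
    beats=[
        "Mara searches the study",
        "She finds the letter tucked in a ledger",
        "She hides it as footsteps approach",
    ],
)
prompt = build_scene_prompt("Two pages of your own prose go here.", scene)
```

The resulting string can be sent to whichever model you use; the point is that the constraints (voice sample, POV, goal, beats) travel with every request instead of living in a long, drifting conversation.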
If you want to automate chapter tracking, versions, or scene summaries, connecting an LLM to a spreadsheet is an easy win. I’ve written a guide on linking ChatGPT to Google Sheets that you can adapt for Claude or other models.
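Before reaching for Sheets at all, even a plain CSV log gets you most of the provenance benefit. A minimal sketch, with filename and columns that are my own assumptions rather than anything from the guide:

```python
import csv
from pathlib import Path

LOG = Path("chapter_log.csv")
FIELDS = ["chapter", "scene", "pov", "status", "summary"]

def log_scene(row: dict) -> None:
    """Append one scene record, writing the header row on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Example entry; call this after each drafting or editing pass.
log_scene({"chapter": 1, "scene": 1, "pov": "Mara",
           "status": "drafted", "summary": "Mara finds the forged letter."})
```

A file like this doubles as the provenance record mentioned above: a dated, append-only trail of what was drafted when, which supports both copyright assertions and publisher disclosure.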
Timelines at a glance
| Workflow | Time to first draft | Time to submission-ready | Costs |
|---|---|---|---|
| Traditional solo process (author’s baseline) | Not disclosed | 6–9 months | Not disclosed |
| AI-assisted (as per Reddit post) | ~1.5 days | ~1 week | Not disclosed |
What this means right now
For UK professionals, the signal is clear: AI is compressing the drafting phase of long-form writing. It won’t replace the human craft of voice, theme, and editorial judgement, but it will change throughput and expectations. If you write for a living, you’ll likely compete with writers who use these tools well. If you publish, you may see more – and faster – submissions that still need strong editing.
> We are all screwed.
I don’t think so. We’re all challenged, certainly. The winners will be the ones who pair strong human taste with fast, careful AI-assisted iteration, and who handle rights, data, and disclosure like professionals.