Google’s 10-result limit: why the removal of num=100 matters for AI retrieval and SEO
Last month, a Reddit post claimed Google quietly removed the num=100 search parameter, which previously let users see up to 100 results on a single page. The post argues the new hard limit is 10 results. If true, this is not just a UI tweak – it changes how information is surfaced for both people and machines.
You can read the original discussion here: Google just cut off 90% of the internet from AI – no one’s talking about it.
What changed: from 100 results to a 10-result cap
The Reddit post suggests that forcing num=100 in Google Search no longer works and that the interface now caps visible results at 10 per page. This matters because many workflows – human and automated – relied on seeing a broader slice of the SERP (search engine results page), especially positions 11-100 (the “long tail”).
| Aspect | Before (as reported) | After (as reported) |
|---|---|---|
| User-visible results per page | Up to 100 via num=100 | Hard limit of 10 |
| Visibility beyond top 10 | Easy to scan positions 11-100 | Requires more clicks or is less accessible |
| Long-tail content discovery | Broader discovery on page 1 | Concentrated on top results |
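To see the coverage cost concretely, here is a minimal Python sketch of the request pattern implied by the table. The URL parameters mirror the reported behaviour; Google may block or ignore them, so treat this as an illustration rather than a working scraper:

```python
from urllib.parse import urlencode

def serp_urls(query, depth=100, page_size=10):
    """Build the paginated Google SERP URLs needed to cover `depth`
    results at `page_size` results per page. Illustrative only: the
    num/start parameters reflect the behaviour as reported."""
    urls = []
    for start in range(0, depth, page_size):
        params = {"q": query, "start": start, "num": page_size}
        urls.append("https://www.google.com/search?" + urlencode(params))
    return urls

# With num=100 honoured, one request covered positions 1-100:
single_request = serp_urls("uk fintech startups", depth=100, page_size=100)

# Under a hard 10-result cap, the same coverage takes ten requests:
ten_requests = serp_urls("uk fintech startups", depth=100, page_size=10)
```

The practical upshot: any pipeline that previously read positions 11-100 in one fetch now needs roughly ten times the requests, with the extra latency, cost, and blocking risk that implies.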
“This is not just an SEO story. It is an AI supply chain issue.”
Why this matters for AI retrieval and RAG systems
Retrieval-augmented generation (RAG) is an approach where a model fetches relevant documents at query time and uses them to produce better answers. Many retrieval pipelines lean on search indices to discover and prioritise content. The Reddit post argues that “most large language models like OpenAI, Anthropic, and Perplexity rely directly or indirectly on Google’s indexed results” to feed such systems.
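For readers less familiar with RAG, the core loop is small. Below is a toy sketch with stubbed-in retrieval and generation functions (the corpus, lambdas, and prompt format are all illustrative, not any vendor's actual pipeline):

```python
def answer_with_rag(query, retrieve, generate, k=5):
    """Minimal retrieval-augmented generation loop: fetch documents at
    query time, then condition the model's answer on them. `retrieve`
    and `generate` stand in for a search backend and an LLM call."""
    docs = retrieve(query)[:k]
    context = "\n\n".join(docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

# Stub backends for demonstration:
corpus = {"uk smes": ["Doc about UK SME growth.", "Doc about SEO channels."]}
retrieve = lambda q: corpus.get(q.lower(), [])
generate = lambda prompt: f"(model answer grounded in {prompt.count('Doc')} docs)"

answer = answer_with_rag("UK SMEs", retrieve, generate)
```

The key dependency is the `retrieve` step: if that step is backed by a SERP that now surfaces only 10 results, everything downstream sees less of the web.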
If the visible web narrows to 10 results per query, the long tail becomes harder to access through simple SERP-driven scraping or scripted workflows built around page-1 depth. In practice, that could mean fewer niche sources make it into AI answers, with more weight going to well-ranked publishers.
But RAG ≠ only Google
Important caveat: the post is a user report, not a formal statement from Google, and modern AI systems draw on a range of sources – their own crawlers, licensed datasets, community platforms, and alternative search APIs. That said, friction added to any major discovery channel will ripple through downstream pipelines.
SEO and visibility: the long tail gets squeezed
The Reddit author cites Search Engine Land to claim “about 88 percent of websites saw a drop in impressions,” and that sites ranking 11-100 “basically disappeared”. The post provides no link or independent confirmation, so treat this cautiously. The broader point holds regardless: if scanning the top 100 results becomes harder, visibility concentrates even further at the top.
For UK startups and SMEs, this intensifies an already tough equation. Organic discovery has been a key early growth lever. If access to long-tail SERP positions is diminished, expect more reliance on:
- Brand, partnerships, and community-led discovery (e.g., forums, newsletters).
- Platform distribution (app stores, marketplaces, aggregators).
- Paid acquisition (with clear ROI discipline).
From an AI perspective, the post claims Reddit citations in LLM outputs dropped, presumably because Reddit often appears deeper in results. If true, that could shift the flavour of model answers towards higher-authority, publisher-led sources.
UK implications: competition, compliance, and practicalities
In the UK, the change intersects with ongoing conversations about competition in digital markets and concentration of power in AI supply chains. If a single platform subtly narrows discoverability, it can have outsized effects on what people and AI systems “see”. While there’s no specific regulation targeted at SERP result counts, the direction of travel in the UK – via the CMA’s Digital Markets Unit work and scrutiny of foundation model ecosystems – is towards ensuring fair access and transparency.
For teams handling personal data in retrieval or crawling, keep GDPR/Data Protection Act obligations in mind. Narrower discovery might push some to build or expand their own crawlers; do so responsibly: respect robots.txt, manage rate limits, and avoid collecting personal data you do not need.
Practical steps for UK developers, SEOs, and AI teams
Reduce over-reliance on a single SERP view
- Diversify retrieval sources beyond one search engine. Consider community content, documentation sites, and domain-specific indexes.
- Strengthen first-party distribution: XML sitemaps, structured data, and fast, well-optimised pages to compete for the top 10.
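On the sitemap point, the format itself is simple enough to generate directly. A minimal sketch (real sitemaps often add lastmod entries and must stay under 50,000 URLs per file; the domain below is a placeholder):

```python
from xml.sax.saxutils import escape

def make_sitemap(urls):
    """Render a minimal XML sitemap per the sitemaps.org 0.9 schema.
    Sketch only: production sitemaps usually include lastmod and are
    split at the 50,000-URL-per-file limit."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

xml = make_sitemap(["https://example.co.uk/", "https://example.co.uk/pricing"])
```

Submitting a clean sitemap does not guarantee top-10 placement, but it removes one source of crawl friction you can actually control.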
Build resilient retrieval pipelines
- Design RAG to accept multiple discovery channels (own crawl, curated feeds, user-supplied corpora) rather than mirroring a single SERP.
- Cache and re-rank: don’t just take the first page; maintain your own relevance layer tailored to your users.
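The two bullets above can be sketched together: merge candidates from several discovery channels, then apply your own relevance layer rather than inheriting one SERP's ordering. This toy version uses weighted reciprocal-rank fusion; the channel names, weights, and domains are illustrative assumptions:

```python
from collections import defaultdict

def merge_and_rerank(channels, weights=None):
    """Merge candidate documents from several discovery channels and
    order them with a weighted reciprocal-rank score, so no single
    SERP dictates the final ranking."""
    weights = weights or {}
    scores = defaultdict(float)
    for channel, docs in channels.items():
        w = weights.get(channel, 1.0)
        for rank, doc in enumerate(docs, start=1):
            scores[doc] += w / rank  # weighted reciprocal-rank fusion
    return sorted(scores, key=scores.get, reverse=True)

candidates = {
    "search_api": ["a.com", "b.com", "c.com"],  # e.g. a third-party SERP API
    "own_crawl": ["c.com", "d.com"],            # your own crawler
    "community": ["d.com", "a.com"],            # forums, newsletters
}
ranking = merge_and_rerank(candidates)
```

Documents surfaced by multiple channels accumulate score across them, so a source buried on one channel can still win overall, which is exactly the resilience the single-SERP pipeline lacks.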
Monitor and adapt
- Track impressions and average position in your analytics and search tooling. If long-tail impressions fall, adjust content strategy.
- For operational workflows, lightweight dashboards help you spot shifts quickly. If you live in spreadsheets, this guide shows how to connect ChatGPT with Google Sheets to automate checks: How to connect ChatGPT and Google Sheets.
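If you prefer scripting the check directly, here is a hedged sketch that computes the share of impressions coming from long-tail positions in a Search Console-style performance export (the column names `impressions` and `position` are assumptions about your export format):

```python
import csv
import io

def long_tail_share(rows):
    """Share of total impressions from queries whose average position
    is deeper than 10. Expects rows with 'impressions' and 'position'
    fields, as in a Search Console performance export (field names
    assumed; adjust to your actual export)."""
    total = tail = 0
    for row in rows:
        imps = int(row["impressions"])
        total += imps
        if float(row["position"]) > 10:
            tail += imps
    return tail / total if total else 0.0

# Inline sample standing in for a downloaded CSV:
sample = io.StringIO(
    "query,impressions,position\n"
    "widgets,900,3.2\n"
    "niche widgets,100,14.7\n"
)
share = long_tail_share(csv.DictReader(sample))  # 100 / 1000
```

Tracking this ratio week over week gives you an early signal of whether your long-tail visibility is actually eroding, rather than relying on anecdote.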
What we know, what we don’t
What the Reddit post asserts
- Google removed the num=100 parameter; results are now limited to 10 per page.
- This reduces what AI retrieval systems can see from the SERP, cutting off the long tail.
- A large share of sites saw impression declines, with positions 11-100 “disappearing” (claimed via Search Engine Land).
- Reddit citations in LLM outputs fell.
Open questions
- Has Google officially confirmed a permanent 10-result cap? Not disclosed in the post.
- To what extent do major LLM providers depend on Google SERPs vs their own crawls and licensed data? Not disclosed.
- Is the reported “88 percent” impression drop broad-based or limited to certain verticals/timeframes? Not disclosed.
Bottom line: algorithmic visibility just got tighter
Whether you view this as a quiet UI change or a structural shift, the practical effect is the same: fewer results visible by default means more competition for the top 10 and more friction for long-tail discovery. For AI, it nudges retrieval pipelines to be less SERP-centric and more multi-source.
For UK businesses, the smart response is diversification – of channels, datasets, and discovery methods – and a renewed focus on clean technical SEO and authoritative content. Don’t wait for confirmation to adapt your playbook. If the long tail has narrowed, plan for it now and build resilience into how your users – human or model – find you.