Why Gen Z Is Pushing Back Against AI—and What It Means for Adoption in 2025

Discover why Gen Z is pushing back against AI and what this means for adoption trends in 2025.


Written By

Joshua
Reading time
» 6 minute read 🤓


“The kids hate AI”: what a Reddit thread says about generative AI in 2025

A recent post by /u/Material-Emu-9068 captures a growing mood, especially among people under 20: scepticism. Their summary of conversations with friends and family is stark.

“No one uses it.”

“Anyone who creates art or the like hates it.”

“It’s actively rejected as ‘AI slop’… [by] the below 20 year old group.”

It’s an anecdote, not a dataset, but it resonates. If you live in a tech bubble, you’ll see AI everywhere. Outside it, the vibe can be very different: low day-to-day use, creative pushback, and a preference for human-made work. The poster worries a “bubble” may pop when lack of usage becomes undeniable. Let’s unpack what this means, and why it matters for UK developers, product teams, and curious readers.

What “AI slop” signals: quality, authenticity, and consent

“AI slop” is internet shorthand for low-effort, detectable AI output – generic prose, uncanny images, soulless marketing copy. For many Gen Z users, it’s not just about quality; it’s about authenticity and consent. They can often spot AI in posters, packaging, stock images, and coursework – and they resent it.

  • Quality fatigue: people reject obviously machine-made content in places where they expect craft or care.
  • Creative ethics: artists worry about training on their work without consent or compensation, and they’re vocal about it.
  • Trust: when AI use isn’t disclosed, audiences feel tricked. When it is disclosed, they often prefer the human version.

That’s a tough backdrop for adoption in consumer-facing experiences. If your users can feel the automation, many will simply opt out.

Does this mean the AI bubble will burst?

The Reddit post claims low usage among “normal” people. That’s plausible in many households, but it’s not the whole story. Two things can be true at once:

  • Standalone chatbots may have limited daily use outside tech circles.
  • AI features are quietly permeating tools people already use (search, office suites, photo apps), even if users don’t call it “AI”.

Whether you call it a bubble depends on your lens. Consumer enthusiasm is uneven, but enterprise adoption is moving – sometimes pragmatically, sometimes hype-led. There's no robust UK-wide usage data to hand here, so treat bubble talk as a hypothesis, not a verdict.

Why some Gen Z users push back

  • Identity and taste: this cohort curates aggressively. Being seen to use “AI slop” can feel like bad taste or poor judgement.
  • School and assessment: plagiarism anxiety, detectors, and unclear policies create a negative association with AI tools.
  • Labour concerns: creatives see displacement risks without fair value exchange.
  • Privacy: younger users are sensitive to tracking and data collection; they ask where prompts, images, and voice data go.

These are rational concerns. If you want adoption, design with them, not against them.

Implications for UK builders and buyers in 2025

Design for trust, not novelty

  • Disclose when and where AI is used. Hidden automation erodes trust.
  • Human-led by default: keep a clear human review step for anything user-facing or reputationally risky.
  • Set a quality bar. If the AI output isn’t better than a human template, don’t ship it.

Respect data and consent

  • Be explicit about training and usage: what data do you collect, and for what purposes?
  • If you fine-tune or store user inputs, give opt-in/opt-out controls and retention limits.
  • UK readers: the ICO’s guidance on AI and data protection is the baseline for lawful, fair, and transparent processing.

Focus on real utility

  • Target high-friction workflows where AI demonstrably saves time or reduces errors.
  • Avoid “AI because AI”. Users won’t pay attention to features that add steps or produce generic results.
  • Measure adoption by repeat use and task completion, not just sign-ups or “time with bot”.
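The last bullet is easy to say and easy to fudge, so here is a minimal sketch of what "repeat use and task completion" could look like as actual metrics. The event schema (`user_id`, `action`) and the function name are illustrative assumptions, not from any particular analytics product.

```python
from collections import defaultdict

def adoption_metrics(events):
    """Compute repeat-use and task-completion rates from a list of
    (user_id, action) events, where action is 'started' or 'completed'.
    Schema and thresholds here are illustrative assumptions."""
    per_user = defaultdict(lambda: {"started": 0, "completed": 0})
    for user_id, action in events:
        per_user[user_id][action] += 1
    users = len(per_user)
    # A "repeat user" came back for at least a second task.
    repeat_users = sum(1 for u in per_user.values() if u["started"] >= 2)
    started = sum(u["started"] for u in per_user.values())
    completed = sum(u["completed"] for u in per_user.values())
    return {
        "repeat_use_rate": repeat_users / users if users else 0.0,
        "task_completion_rate": completed / started if started else 0.0,
    }

events = [
    ("alice", "started"), ("alice", "completed"),
    ("alice", "started"), ("alice", "completed"),
    ("bob", "started"),  # bob tried once and bounced
]
print(adoption_metrics(events))
# repeat_use_rate: 0.5 (1 of 2 users), task_completion_rate: ~0.67 (2 of 3)
```

Sign-ups would count bob as a win; these two numbers don't, which is the point.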

Practical moves to prove value – without the “slop”

The Reddit poster notes seeing ads for very basic AI use cases. If that’s your whole pitch, users will tune out. Instead, pick a specific, boring-but-valuable workflow and make it great.

  • Automate a repetitive report that takes someone 90 minutes every Friday.
  • Improve a customer email response that currently requires context from three systems.
  • Clean and deduplicate messy data before it hits your analytics pipeline.
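To make the last item concrete: the boring heart of deduplication is normalising the fields you match on before comparing them. Here is a small stand-alone sketch; the record fields and normalisation rules are assumptions you'd adapt to your own data.

```python
def normalise(record):
    """Canonicalise the matching fields: collapse internal whitespace
    in names and lower-case everything. Rules here are illustrative."""
    return (
        " ".join(record["name"].split()).lower(),
        record["email"].strip().lower(),
    )

def deduplicate(records):
    """Keep the first occurrence of each normalised (name, email) pair."""
    seen = set()
    out = []
    for record in records:
        key = normalise(record)
        if key not in seen:
            seen.add(key)
            out.append(record)
    return out

rows = [
    {"name": "Ada  Lovelace", "email": "ada@example.com"},
    {"name": "ada lovelace", "email": "ADA@example.com "},  # same person
    {"name": "Grace Hopper", "email": "grace@example.com"},
]
print(len(deduplicate(rows)))  # 2
```

A model can help suggest which fields identify a duplicate, but the dedup itself should be deterministic code like this, so the result is auditable.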

If you’re already in the OpenAI ecosystem, one low-friction example is connecting a model to your spreadsheets to validate and transform data in place. I’ve outlined a practical approach here: How to connect ChatGPT and Google Sheets. It’s not glamorous, but it’s the kind of workflow that sticks because it saves real time.

How to avoid “AI slop” in creative and public outputs

  • Use AI as a drafting assistant, not a final author. Keep a strong human editorial pass.
  • Bring your own voice: feed the model house style guides, glossaries, and examples to reduce genericness.
  • Credit sources and disclose generation. If you used AI to ideate or draft, say so briefly.
  • Respect rights: do not train on or mimic living artists’ styles without permission.
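One practical way to "bring your own voice" is to front-load every drafting prompt with your style guide, glossary, and examples rather than relying on the model's defaults. A minimal sketch, assuming plain-text materials; the function name and all content below are hypothetical:

```python
def build_drafting_prompt(style_guide, glossary, examples, task):
    """Assemble a drafting prompt that leads with house voice.
    All inputs are plain strings/dicts; content here is illustrative."""
    glossary_lines = "\n".join(f"- {term}: {meaning}" for term, meaning in glossary.items())
    example_block = "\n\n".join(examples)
    return (
        f"House style guide:\n{style_guide}\n\n"
        f"Glossary (use these terms consistently):\n{glossary_lines}\n\n"
        f"Examples of our voice:\n{example_block}\n\n"
        f"Task (draft only; a human editor takes the final pass):\n{task}"
    )

prompt = build_drafting_prompt(
    style_guide="British English. Short sentences. No hype.",
    glossary={"AI slop": "low-effort, detectable AI output"},
    examples=["We ship boring workflows that save real time."],
    task="Draft a 100-word product update about the new export feature.",
)
print(prompt.splitlines()[0])  # House style guide:
```

Keeping the materials in version control alongside your copy means the "voice" is reviewable, not buried in ad-hoc prompts.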

What to watch in 2025

  • Content provenance: more tools for watermarking and verifying whether images or text are AI-assisted.
  • Creative compensation: licensing and revenue-sharing models for datasets may become normalised.
  • On-device and private AI: local models and edge inference could address privacy concerns for sensitive workflows.
  • Education policy: clearer guidance on acceptable AI use in schools and universities will shape student attitudes.

Bottom line: adoption will favour quiet utility and clear ethics

The Reddit thread reflects a real sentiment: people, especially younger users, don’t want their world filled with detectable, low-effort machine output. If you’re building or buying AI in the UK, assume your audience can tell when you’ve phoned it in – and they’ll reject it.

Adoption in 2025 won’t be driven by splashy demos. It will come from invisible improvements to everyday tasks, transparent practices around data and training, and a genuine respect for human craft. If you can deliver those, you won’t need to convince people. They’ll notice the work gets better – and they’ll keep using it.

Source

Reddit discussion: “The kids hate AI.” by /u/Material-Emu-9068.

Last Updated

December 28, 2025


