Should ChatGPT Have Ads? Privacy, Trust and the Future of Monetising AI Assistants

Exploring whether ChatGPT should include ads and the implications for user privacy and trust in the future of AI monetisation

Written by Joshua · 6 minute read

OpenAI testing ads in ChatGPT: privacy, trust, and why it matters in the UK

A widely shared Reddit post highlights a Times Opinion essay by Zoë Hitzig, a former OpenAI researcher, who says OpenAI has begun testing ads in ChatGPT and that she resigned over concerns about the product's direction and the policies behind it. The claims strike at a core issue for AI assistants: can a tool built on intimate, everyday conversations be safely monetised with advertising?

Below I unpack the key arguments, why this matters for UK users and organisations, and practical steps if you rely on ChatGPT or build AI assistants yourself.

What the post alleges: ads in ChatGPT and a loss of trust

According to the essay summarised in the Reddit post, OpenAI has started testing ads in ChatGPT. The details – ad formats, targeting rules, opt-outs, and data handling – are not disclosed.

“Users have generated an archive of human candor that has no precedent.”

Hitzig argues that advertising layered onto this “archive” risks manipulation at a depth we don’t fully understand. She also rejects the framing that the only choices are either paywalls or ads, suggesting there are better funding models for AI tools that reduce incentives to surveil and profile users.

You can read the discussion here: OpenAI Is Making the Mistakes Facebook Made. I Quit.

Why ads in conversational AI are different from search ads

Search advertising has decades of norms, audit trails, and a clear intent signal: you typed a query. Conversational AI is more intimate, continuous and context-rich. People use ChatGPT for health worries, finances, relationships, and work dilemmas – often in one flowing session. That creates three amplified risks:

  • Deep context exposure: Chat transcripts can contain special category data (health, beliefs, sexual orientation) and highly sensitive metadata about mood and intent.
  • Invisible targeting: If ads are tailored using model inferences, users may not see why they’re being shown something or how to contest it.
  • Persuasive format: Ads integrated into a conversational flow may feel like guidance, not marketing, blurring lines between assistance and influence.

“Advertising built on that archive creates a potential for manipulating users.”

UK lens: GDPR, profiling, and fairness duties

For UK readers, the ICO framework is clear: privacy by design is not optional, and adtech has been under scrutiny for years. Applying that to AI assistants:

  • Lawful basis and consent: Using conversation data for personalised advertising may require opt-in consent, especially if profiling uses special category data. Inferring sensitive traits can itself trigger stricter rules.
  • Transparency: Users must understand what data feeds ad decisions, including retention periods, third-party sharing, and whether training data is involved.
  • Right to object and to be free from solely automated decisions: If ad delivery has significant effects (e.g. financial or health-related nudges), users need accessible opt-outs and review mechanisms.
  • Children and vulnerable users: Additional protections apply; manipulative design is a red flag under the Children’s Code and fairness principles.
  • DPIAs: Any organisation deploying advertising in a chatbot should perform a Data Protection Impact Assessment covering prompts, logs, targeting, and vendor processing.

None of this bans monetisation. It does require careful scoping: contextual ads, strict data minimisation, and strong user controls are safer than behaviourally targeted ads inferred from private chat content.
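
To make "strict data minimisation and strong user controls" a little more concrete, here is a minimal sketch of a default-deny consent gate. Everything in it is hypothetical – the function, settings fields, and regex patterns are illustrative, not anything OpenAI has published – but it shows the shape of the safer posture: chat content reaches an ad pipeline only after an explicit opt-in, and special category signals never do.

```python
import re
from dataclasses import dataclass

# Illustrative patterns only: real detection of UK GDPR special category
# data (health, beliefs, sexual orientation, and so on) needs far more
# than a few regexes.
SPECIAL_CATEGORY_PATTERNS = [
    re.compile(r"\b(diagnos\w*|medication|therapy|depress\w*)\b", re.I),
    re.compile(r"\b(religio\w*|faith|worship)\b", re.I),
    re.compile(r"\b(sexual orientation|sexuality)\b", re.I),
]

@dataclass
class PrivacySettings:
    ads_personalisation_opt_in: bool = False  # opt-in, never assumed
    is_minor: bool = False                    # Children's Code: extra care

def chat_text_for_ads(text: str, settings: PrivacySettings) -> str | None:
    """Return the text an ad pipeline may see, or None if it may see nothing."""
    # Default-deny: without an explicit opt-in, chat content never leaves
    # the assistant for advertising purposes.
    if settings.is_minor or not settings.ads_personalisation_opt_in:
        return None
    # Even with consent, special category signals never feed ad decisions.
    if any(p.search(text) for p in SPECIAL_CATEGORY_PATTERNS):
        return None
    return text
```

The point of the default-deny design is that any failure – a missing setting, an unparsed transcript – results in no targeting rather than accidental targeting.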

Legitimate arguments for and against ads in ChatGPT

Potential upsides of ads

  • Access and inclusion: Ads can keep a baseline service free for students, jobseekers, and small businesses who can’t afford subscriptions.
  • Separation of compute from content: Contextual ads that don’t use chat content – for example, fixed placements around the interface – are less risky than adaptive, in-conversation ads (a sketch of this separation follows this list).
  • Transparency and control: Clear labels, no mixing with model outputs, and robust opt-outs can preserve user trust.
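
As promised above, here is what "separation of compute from content" can mean in code. This is a hypothetical illustration, not any real ad system: the interesting property is the type signature, which gives the selector no parameter through which chat content or user profiles could flow.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class SurfaceContext:
    """Everything the selector is allowed to know. Note what is absent:
    no user ID, no chat transcript, no model inferences."""
    locale: str   # e.g. "en-GB"
    surface: str  # e.g. "sidebar", "footer"

@dataclass(frozen=True)
class Ad:
    sponsor: str
    copy: str
    label: str = "Sponsored"  # always labelled, never blended with answers

# Fixed inventory keyed only on locale and placement.
AD_INVENTORY: dict[tuple[str, str], list[Ad]] = {
    ("en-GB", "sidebar"): [Ad("ExampleCo", "Cloud backups for small teams")],
}

def pick_contextual_ad(ctx: SurfaceContext) -> Ad | None:
    """A fixed placement: no behavioural signal can flow in, because the
    signature accepts nothing but the surface context."""
    candidates = AD_INVENTORY.get((ctx.locale, ctx.surface), [])
    return random.choice(candidates) if candidates else None
```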

Main risks and trade-offs

  • Profiling from sensitive data: Even if names are removed, models can infer sensitive traits. Using those inferences for ads invites regulatory and ethical issues.
  • Chilling effects: People may self-censor on medical, legal, or mental health topics if they suspect ads are watching.
  • Blended content risks: Ads that resemble assistant answers could erode trust in all outputs, not just the promoted ones.

Alternative monetisation models that reduce surveillance incentives

Hitzig argues the “paywall vs ads” framing is a false choice. Practical alternatives include:

  • Tiered subscriptions for organisations: Enterprises subsidise consumer access; governance and compliance features justify higher tiers.
  • Usage-based pricing on infrastructure: Charge for API/compute while keeping consumer assistants as a low-cost public good.
  • Contextual-only sponsorships: Fixed, clearly separated placements not powered by chat content or inferences.
  • On-device and open models: For private use cases, local or on-prem models reduce pressure to monetise user data centrally.

What UK teams and developers should do now

If you use ChatGPT in your organisation

  • Review data settings: Disable conversation use for model training where possible; avoid entering special category data unless contractually protected.
  • Update privacy notices: Be explicit about any AI tooling in your workflow, retention of prompts, and whether third-party processors are involved.
  • Segment use cases: Keep consumer assistants away from HR, health, legal, or other sensitive workflows; prefer enterprise contracts with clearer DPAs.
  • Prepare for change: If ads roll out broadly, reassess user experience, internal policies, and whether alternative tools are more appropriate for sensitive tasks.

If you build chatbots or assistants

  • Privacy by design: Data minimisation, short retention, and no behavioural targeting by default. If you show ads, keep them outside the conversational flow.
  • Explainability: Prominent labels for sponsored content and a clear “Why am I seeing this?” link.
  • Controls and logs: Offer simple opt-outs, don’t gate core features behind ads, and maintain auditable logs for how ad decisions are made (a minimal logging sketch follows this list).
  • DPIA and testing: Run a DPIA, test with vulnerable-user scenarios, and document mitigations.
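
As flagged in the list above, here is one minimal way the "controls and logs" item could look. All names and fields are illustrative assumptions; the structure matters more than the specifics: every decision records what was shown, on what basis, and under which consent state, so opt-outs and audits have something to inspect.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AdDecisionRecord:
    """One auditable entry per ad decision, kept apart from chat logs."""
    timestamp: str
    surface: str          # where the ad appeared, e.g. "sidebar"
    ad_id: str            # "none" when the user has opted out
    selection_basis: str  # "contextual" only in this design
    user_opted_out: bool

def log_ad_decision(surface: str, ad_id: str, opted_out: bool,
                    path: str = "ad_decisions.jsonl") -> AdDecisionRecord:
    record = AdDecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        surface=surface,
        ad_id="none" if opted_out else ad_id,
        selection_basis="contextual",
        user_opted_out=opted_out,
    )
    # Append-only JSON Lines: cheap to write, easy to hand to an auditor
    # or attach to a DPIA as evidence of how decisions were actually made.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```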

If you’re prototyping assistants that connect to business tools, I’ve covered a practical integration pattern here: How to connect ChatGPT and Google Sheets with a Custom GPT. The same governance thinking applies: least privilege, clear scopes, and no unnecessary data in prompts.
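
The "no unnecessary data in prompts" principle fits in a few lines. The ticket fields below are made up for illustration; the design choice is to whitelist the fields a task needs rather than trying to blacklist the sensitive ones.

```python
# Hypothetical row fetched from a business tool (all values invented).
ticket = {
    "id": "T-1042",
    "subject": "Invoice query",
    "body": "Customer asks about VAT on invoice 883.",
    "customer_email": "jane@example.com",  # not needed to draft a reply
    "customer_phone": "+44 7700 900123",   # not needed either
}

# Whitelist, don't blacklist: only named fields ever reach the prompt.
PROMPT_FIELDS = ("id", "subject", "body")

def minimal_prompt(row: dict) -> str:
    kept = {k: row[k] for k in PROMPT_FIELDS if k in row}
    return "Draft a polite reply to this ticket:\n" + "\n".join(
        f"{k}: {v}" for k, v in kept.items()
    )

print(minimal_prompt(ticket))
```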

What to watch next

  • OpenAI disclosures: Look for details on whether ads are contextual or behavioural, how targeting works, and how data is retained or shared. None of this is disclosed in the post so far.
  • Regulatory signals: Any statements from the ICO on conversational ads, profiling, and fairness in AI assistants will set expectations quickly.
  • User sentiment: If ads erode trust, usage could shift to subscription tiers, enterprise offerings, or local/open models.

Bottom line

Advertising inside a conversational assistant is not the same as a banner on a web page. The intimacy of chat data, the power of model inferences, and the persuasive format raise the bar for consent, transparency, and fairness – especially under UK GDPR. There are viable ways to keep AI broadly accessible without turning private conversations into ad fuel. If ads do arrive in assistants you use, treat it as a governance event: revisit your risk assessment, update user guidance, and keep sensitive work off consumer tools.

Last updated: February 15, 2026