AI, Authenticity and Trust: Deepfakes, AI‑Written Content and What Comes Next in 2025

Explore the challenges of authenticity and trust in AI for 2025, including deepfakes and AI-written content.

Written by Joshua · 6 minute read


“AI is ruining everything”: what this frustration gets right (and wrong)

A recent Reddit thread captures a growing unease with generative AI. The poster worries about priests using ChatGPT to draft sermons and about hyper‑realistic AI images and videos eroding trust in anything we see online.

“Imagine going to a church and you’re basically worshipping to the words of an AI.”

The concern is understandable. Two trends are converging: AI‑written text in places where authenticity matters, and synthetic media (deepfakes) that can convincingly imitate reality. Both challenge how we judge credibility, intent and authorship.

For a UK audience, this isn’t abstract. Our institutions – churches, schools, councils, newsrooms – rely on trust, and our information environment is already strained by misinformation. The question is not whether AI is used, but how it’s used, disclosed and governed.

AI‑written sermons and public trust: authorship, disclosure and duty of care

Clergy, teachers, civil servants and journalists are all experimenting with AI tools as drafting aids. The ethical issues are similar across these roles: stewardship of authority, respect for audiences and accurate attribution.

Transparency is the minimum

When a person speaks from a position of trust, we expect them to be the author. If AI contributed materially – for structure, arguments or full paragraphs – disclose it. A simple note such as “Drafted with AI assistance; reviewed and edited by [Name]” preserves honesty without banning useful tools.

Retain the human voice and responsibility

AI can help brainstorm or de‑jargonise, but it cannot know the local context, the congregation or their pastoral needs. Final responsibility should remain with the human. That means verifying facts, checking tone, and ensuring the message reflects lived experience rather than generic text.

Guard against bias and errors

Generative models can “hallucinate” – produce fluent but incorrect statements. They also reflect biases in their training data. For any public‑facing content (sermons, lectures, briefings), fact‑checking and sensitivity review are non‑negotiable.

Deepfakes and synthetic media: why the trust gap is widening

“Deepfakes” are AI‑generated or heavily edited images, audio or video made to look authentic. Advances in model quality and the scale of available training data mean synthetic media is cheaper, faster to produce and more believable than ever.

Why detection is hard

Automated detectors can help, but there’s no perfect tool. As generation improves, detection models play catch‑up. That arms race favours attackers over time.

Provenance and verification beat “spot the artefact”

Instead of hunting for visual glitches, look for provenance – who captured it, when, and with what device – and for corroboration from trusted sources. The C2PA standard attaches cryptographically signed “content credentials” at the point of capture or editing, recording the device used and the edit history. You can inspect these using the official verifier where available.
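
If you want to check credentials programmatically rather than through the web verifier, here is a minimal sketch. It assumes the open‑source c2patool CLI from the Content Authenticity Initiative is installed and on your PATH; the exact JSON it prints varies between releases, so treat this as illustrative rather than a reference implementation.

```python
# Sketch: inspect Content Credentials (C2PA) on a media file.
# Assumes the open-source `c2patool` CLI is installed and on PATH;
# its JSON output shape varies between releases.
import json
import subprocess
import sys

def inspect_credentials(path: str) -> None:
    """Print any C2PA manifest found on `path`."""
    try:
        # `c2patool <file>` prints the manifest store as JSON when
        # content credentials are present (behaviour assumed from the
        # tool's documented basic usage).
        result = subprocess.run(["c2patool", path],
                                capture_output=True, text=True, check=True)
    except FileNotFoundError:
        sys.exit("c2patool is not installed or not on PATH")
    except subprocess.CalledProcessError as err:
        print(f"No readable content credentials: {err.stderr.strip()}")
        return
    manifest = json.loads(result.stdout)
    # In practice you would look for the signer, the capture device
    # and the recorded edit actions inside this manifest.
    print(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    inspect_credentials(sys.argv[1])
```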

Practical steps you can take now

  • Reverse image search: use Google Images/Lens or TinEye to see if a photo/video frame appeared earlier in another context (a local pre‑screen sketch follows this list).
  • Check content credentials: where supported, inspect C2PA/Content Credentials for capture device and edits.
  • Use verification tools: the EU‑funded InVID plugin helps with keyframes, metadata and social context.
  • Seek reputable corroboration: is the claim reported by established outlets with named sources? If not, treat with caution.
  • Mind the incentive: sensational clips that travel without provenance are higher‑risk. Slow down before sharing.
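
Reverse image search works because recirculated media is usually a near‑duplicate of something already online. As a local pre‑screen before you upload anything sensitive, you can compare perceptual hashes of a suspect frame against images you have already verified. Below is a minimal sketch using the Pillow and imagehash libraries; the distance threshold is an illustrative assumption, not a calibrated value.

```python
# Sketch: flag a suspect frame as a likely recirculated image by
# comparing perceptual hashes against a folder of known images.
# Requires Pillow and imagehash (pip install Pillow imagehash).
from pathlib import Path

import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # assumed cut-off; tune against your own collection

def find_near_duplicates(suspect: str, known_dir: str) -> list[str]:
    suspect_hash = imagehash.phash(Image.open(suspect))
    matches = []
    for path in Path(known_dir).glob("*.jpg"):
        known_hash = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives the Hamming distance;
        # small distances mean visually near-identical images.
        if suspect_hash - known_hash <= MATCH_THRESHOLD:
            matches.append(str(path))
    return matches

if __name__ == "__main__":
    hits = find_near_duplicates("suspect_frame.jpg", "verified_images")
    print("Possible earlier appearances:", hits or "none found locally")
```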

What the UK is doing: rules, guidance and standards

The UK’s approach to AI governance is regulator‑led rather than one big AI law. Several bodies already have relevant powers.

  • Ofcom and the Online Safety Act: Ofcom is developing codes of practice for platforms to reduce illegal content and harms, including manipulated media. See Ofcom’s Online Safety hub for the roadmap.
  • ICO guidance on AI and data protection: organisations using AI must justify data use, assess risks, and be transparent. Read the ICO’s guidance on AI.
  • CMA and competition in foundation models: the competition regulator is monitoring how big players shape the market and consumer outcomes. See the CMA’s foundation models work.
  • NCSC secure AI development: security guidance for anyone building or deploying AI systems. See the NCSC’s secure AI guidelines.

Internationally, the EU AI Act includes a disclosure duty for deepfakes; many UK organisations will align to that standard to operate across borders. Meanwhile, industry‑backed provenance via C2PA is gaining traction in cameras, newsrooms and creative tools.

Using AI without losing your voice: a simple policy you can adopt

Outright bans rarely work. Practical, transparent norms are better. For teams and institutions, consider adopting these rules:

  1. Disclosure: if AI contributed beyond light proofreading, say so. Keep a one‑line note for public materials.
  2. Review: a named human must read, fact‑check and approve every AI‑assisted output before publication.
  3. Provenance: preserve drafts and sources. For images and video you create, enable content credentials where possible.
  4. Data protection: avoid pasting personal or sensitive data into public AI tools unless you have a clear lawful basis and processor terms.
  5. Scope: define where AI help is allowed (e.g., brainstorming, outlines) and where it is not (e.g., safeguarding decisions, pastoral counselling notes).

Tooling for traceability and audit

One easy win is to log prompts and outputs for anything you publish. It helps with accountability and training. If you already live in spreadsheets, you can wire AI outputs into Google Sheets and keep a record for audit. Here’s a practical guide: Connect ChatGPT and Google Sheets (Custom GPT).
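
As a sketch of what such a log can look like, the snippet below appends one row per AI‑assisted output to a Google Sheet via the gspread library. The sheet title, credentials path and column layout are illustrative assumptions; adapt them to your own workspace.

```python
# Sketch: append an audit record for each published AI-assisted output.
# Uses gspread (pip install gspread) with a Google service account;
# sheet title, credentials filename and columns are illustrative
# assumptions, not a fixed schema.
from datetime import datetime, timezone

import gspread

def log_ai_output(prompt: str, output: str, reviewer: str) -> None:
    gc = gspread.service_account(filename="service-account.json")  # assumed path
    worksheet = gc.open("AI audit log").sheet1  # assumed sheet title
    worksheet.append_row([
        datetime.now(timezone.utc).isoformat(),  # when it was logged
        prompt,                                  # what was asked of the model
        output,                                  # what was published
        reviewer,                                # named human who approved it
    ])

if __name__ == "__main__":
    log_ai_output(
        prompt="Outline a briefing on the Online Safety Act",
        output="(final approved text)",
        reviewer="A. Editor",
    )
```

A plain CSV kept alongside your drafts works just as well; the point is that the record exists and names a reviewer.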

Why this matters for the UK information space

Trust is a public good. Whether you’re a priest, a press officer, a developer or a content creator, your audience will increasingly ask two questions: did a human stand behind this, and can I verify it? Meeting that expectation doesn’t mean rejecting AI; it means documenting and disclosing how you used it, and building provenance into your workflow.

“It just feels like AI is being used for all the wrong reasons at the moment.”

It can be used for the right ones too – making complex topics understandable, speeding up research, and improving accessibility. The line is simple: when authenticity matters, be transparent; when evidence matters, show it.

Last updated: December 7, 2025
