Scary AI usage at university: what this Reddit post is really about
A short post on Reddit titled “Scary AI Usage” caught my eye. The author, a 22-year-old student, says:
Everyone in my Uni is using AI to breeze through the assignments and it’s scaring me how many important concepts they are skipping.
It’s a familiar worry. Generative AI can help you move faster, but it can also help you miss the point. For UK universities, this isn’t just a cheating problem – it’s an assessment design and skills problem.
If you want the context, here’s the original post: Scary AI Usage on Reddit.
Why this matters for UK students and universities
Most UK programmes still assess demonstrated understanding: exams, oral defences, lab work, portfolios, and professional practice. If students outsource the thinking to AI, they'll struggle when the scaffolding disappears. Employers are already tuning their interviews to test reasoning, not regurgitation.
For universities, the issue is twofold: academic integrity and learning outcomes. If assessment tasks can be completed by an untrained user of a chatbot, that doesn't just tempt cheating – it signals a misalignment between tasks and the skills we mean to assess.
Cheating versus acceptable assistance: a practical line
Most UK institutions now permit some AI use with disclosure, but the rules vary by course and even by assignment. As a working rule of thumb:
- Usually allowed: idea generation, outlining, study aids, code or grammar suggestions, and feedback – if you review, edit, and reference appropriately.
- Usually not allowed: submitting AI-generated content as your own original work, unless the assignment explicitly authorises it with disclosure.
- Always wise: include a short AI usage statement (tool, purpose, prompts used) where permitted.
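Where disclosure is permitted, the statement can be very short. Here is a hypothetical example format – check your own course's requirements, which take precedence:

```
AI usage statement
Tool: ChatGPT (GPT-4o)
Purpose: brainstormed essay structure; grammar check on final draft
Prompts: "Suggest an outline for an essay on X"; "Proofread this paragraph"
Verification: all arguments, sources, and final wording are my own
```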
Be wary of AI “detectors” used to police this. They’re unreliable and prone to false positives. Even OpenAI retired its own AI text classifier citing low accuracy. Policy, assessment design, and transparent student practice will always be more effective than guesswork.
Assessment design in the GPT era: options that still measure learning
If you teach or design modules, you don’t need to ban AI to protect standards. You can design tasks where AI is a tool, not a shortcut:
- Use authentic assessments tied to local or live contexts (client briefs, site-specific data, organisational policies) that require judgement and evidence.
- Assess the process, not just the product: require drafts, version control, a prompt log, and short reflections on how sources, tools, and reasoning changed the work.
- Introduce brief oral checks (vivas) to probe understanding of submitted work – five minutes can reveal depth of learning.
- Provide unique or rotating datasets, images, or case variations so each student must adapt their approach.
- Run build stages in class (design, plan, prototype), then have students finish the polish as coursework with disclosed AI support.
- Mark the evaluation: ask students to critique AI outputs, identify errors, and improve them with citations.
- For programming, mark code reviews and tests as much as implementation; include prompts and rationale in submissions.
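The "unique or rotating datasets" idea above can be sketched in a few lines. This is a hypothetical Python helper (the function name and data shape are illustrative) that derives a deterministic per-student dataset from the student ID, so every submission needs its own analysis while marking stays reproducible:

```python
import hashlib
import random

def student_dataset(student_id: str, n: int = 50) -> list[float]:
    """Generate a deterministic, per-student data variant.

    The student ID is hashed into a seed, so the same student always
    receives the same data, but no two students share a dataset.
    """
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    # Example payload: noisy linear data with per-student slope and intercept.
    slope = rng.uniform(0.5, 2.0)
    intercept = rng.uniform(-5.0, 5.0)
    return [slope * x + intercept + rng.gauss(0, 1.0) for x in range(n)]
```

Because the seed comes from the ID rather than a stored file, staff can regenerate any student's dataset on demand when marking or running a viva.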
These approaches don’t eliminate AI – they make it part of the professional workflow you’re trying to teach.
Productive, ethical ways for students to use AI
If you’re a student, AI can raise your game when used deliberately:
- Teach-back: ask a model to quiz you, then explain your answers back. Iterate until you can defend each step.
- Concept repair: paste your own notes and ask for contradictions, missing steps, or alternative proofs – then verify with course materials.
- Code companion: use AI for test generation, boundary cases, or refactoring notes; you still own the design and debugging.
- Prompt transparency: keep a brief appendix of prompts, settings, and how you validated outputs (and cite sources properly).
- Don’t outsource citations: AI can hallucinate references; check databases and the primary literature yourself.
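The "code companion" pattern above might look like this in practice: a small function of your own (`clamp` here is a made-up example), plus the kind of boundary-case tests an AI assistant might suggest – each one reviewed by you and kept only if it matches the behaviour you intend:

```python
def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Boundary cases an assistant might propose -- you still own the design,
# so every test is checked against the intended specification.
assert clamp(5, 0, 10) == 5     # in range: unchanged
assert clamp(-1, 0, 10) == 0    # below range: pinned to low
assert clamp(11, 0, 10) == 10   # above range: pinned to high
assert clamp(0, 0, 10) == 0     # exactly on the low boundary
assert clamp(10, 0, 10) == 10   # exactly on the high boundary
```

The point is the division of labour: the model drafts candidate edge cases, and you decide which ones encode real requirements.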
For non-assessed productivity, you can also automate routine tasks. For example, I’ve shown how to connect ChatGPT and Google Sheets to speed up data wrangling. Useful skills – and safely away from graded work.
Privacy, ethics, and UK data protection
Think before you paste. Uploading personal data, client information, or unpublished research to public AI tools can breach confidentiality obligations and UK data protection law (the UK GDPR and the Data Protection Act 2018). Many vendors retain prompts for service improvement unless you opt out or use enterprise accounts.
- Use institution-provided tools when available; they are more likely to have appropriate data processing terms in place.
- Strip or anonymise sensitive data; treat anything you paste into a public model as potentially persistent.
- Review vendor policies on training, retention, and region of data processing. See OpenAI’s note on how your data is used and the ICO’s guidance on AI and data protection.
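Here is a minimal sketch of the "strip or anonymise" step: a hypothetical regex-based redactor for emails and UK-style mobile numbers. Real pseudonymisation needs far more care than this – regexes miss names, addresses, and context clues – so treat it as a first pass, not a guarantee:

```python
import re

def redact(text: str) -> str:
    """Replace obvious personal identifiers before text leaves your machine.

    A first-pass filter only: review the output yourself before pasting
    it into any public AI tool.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    text = re.sub(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b", "[PHONE]", text)
    return text
```

Example: `redact("Contact jo.bloggs@example.ac.uk or 07700 900123.")` returns `"Contact [EMAIL] or [PHONE]."`.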
Takeaway: don’t fear AI – fix the incentives
The Reddit post points to a real problem: AI makes it easier to skip the hard thinking. The answer isn’t panic or magical detectors. It’s clarity on acceptable use, smarter assessment that values process and judgement, and student habits that turn AI into a learning accelerator, not a crutch.
If you’re worried that everyone else is “breezing through”, remember this: shortcuts show up fast in exams, interviews, and real work. Use the tools – but do the learning.