Google’s agentic IDE reportedly wiped a user’s HDD – what happened and what we can learn
A Reddit user claims Google’s agentic IDE, “Antigravity”, deleted their entire D drive after misinterpreting an instruction to clear a project cache. The incident, covered by Tom’s Hardware, highlights how “agentic AI” – tools that plan and execute actions on our behalf – can go badly wrong without strong safety rails.
The user says they asked Antigravity to clear a cache while restarting a server. The AI reportedly ran a Windows rmdir command aimed at the root of D:, not the specific project folder, with the /q (quiet) flag suppressing any confirmation prompt. Deletes made from the command line never pass through the Recycle Bin, so the files were gone outright.
When challenged over whether it had permission, the agent itself reportedly conceded:
“No, you did not give me permission to do that.”
“I am deeply, deeply sorry. This is a critical failure on my part.”
Antigravity then suggested recovery steps. The user tried Recuva but said media files could not be recovered. They advised others to avoid “turbo mode”.
Important caveat: we have only the user’s account and a media report. Google’s position, Antigravity’s status (public release or preview), the exact configuration, and the full command logs have not been disclosed.
What this tells us about agentic AI, safety, and developer workflows
Agentic AI can be brilliant at automating repetitive developer tasks. It can also magnify simple mistakes into catastrophic changes at machine speed. The likely failure here will be familiar to any ops engineer: a path-resolution or scoping error, compounded by a destructive command with elevated permissions and no interactive confirmation.
On Windows, rmdir (or rd) is unforgiving: the /s flag deletes an entire directory tree, /q suppresses the confirmation prompt, and nothing it removes ever passes through the Recycle Bin. Point it at the wrong directory and everything beneath it is gone. See Microsoft’s documentation: rd/rmdir command.
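The fix for this class of error is conceptually simple: resolve the target path and refuse anything outside an agreed project directory before any destructive call runs. Here is a minimal Python sketch; the function name and shape are illustrative, not from Antigravity or any vendor API:

```python
from pathlib import Path

def scoped_delete_target(target: str, project_root: str) -> Path:
    """Resolve a delete target; refuse anything outside the project root."""
    resolved = Path(target).resolve()
    root = Path(project_root).resolve()
    if root not in resolved.parents:
        raise PermissionError(f"{resolved} is outside project root {root}")
    return resolved

# A drive root has no parents, so the reported failure is caught at once:
# scoped_delete_target("D:\\", r"D:\projects\demo")  -> PermissionError
```

Only the path this function returns should ever be handed to shutil.rmtree or an equivalent OS call.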
Why this matters for UK developers, teams, and organisations
- Data protection and accountability: UK GDPR and the Data Protection Act 2018 expect appropriate technical and organisational measures. Autonomous tools that can destroy or exfiltrate data raise obvious concerns. See the ICO’s guidance on AI and data protection.
- Operational risk: An IDE or agent with broad filesystem access can turn a routine task into a major outage. Think RTO/RPO, backups, and incident response plans.
- Procurement and due diligence: Before rolling out agentic tooling, ask vendors how they sandbox actions, handle destructive commands, log activity, and enforce least privilege.
- Insurance and liability: Check how autonomous actions affect your cyber insurance and supplier contracts. Who pays when an AI makes a critical mistake?
Common failure modes in agentic tools
- Path scoping errors: Targeting a root or parent directory rather than a project folder.
- Globbing and variable expansion: Unintended matches from wildcards, or an unset environment variable silently widening a path (see the sketch after this list).
- Destructive defaults: Using flags that bypass safety features (e.g., quiet, force, recursive).
- Privilege creep: Running as admin or with write access to entire drives.
- Poor confirmation UX: No dry-run, confirmation, or human-in-the-loop before high-impact actions.
- Inadequate sandboxing: Agents operating directly on the host machine rather than in a constrained environment.
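Several of these are easy to reproduce. This hypothetical Python fragment shows how an unset environment variable silently widens a scoped path into a root-level one, and the loud failure that prevents it:

```python
import os

# If PROJECT_DIR is unset, the "" fallback yields "/cache": a root-level
# path on POSIX, and a current-drive-root path ("\cache") on Windows.
risky = f"{os.environ.get('PROJECT_DIR', '')}/cache"

# Safer: refuse to build a delete path from a missing variable.
project_dir = os.environ.get("PROJECT_DIR")
if project_dir is None:
    raise RuntimeError("PROJECT_DIR is not set; refusing to construct a delete path")
cache_dir = os.path.join(project_dir, "cache")
```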
Practical safeguards to build safer agentic AI tools
If you’re building or adopting agentic developer tools, implement layered defences; the goal is to make catastrophic actions hard, visible, and reversible. A sketch after the table shows two of these controls in code.
| Control | Why it helps |
|---|---|
| Dry-run by default | Shows the exact files/paths affected before execution. Require explicit opt-in to proceed. |
| Human-in-the-loop confirmations | Interactive prompts for destructive or high-scope actions, with clear diffs/previews. |
| Filesystem allowlists | Agents can only modify directories explicitly allowlisted per project, never drive roots. |
| Constrained execution environments | Run agents in containers/VMs with limited mounts and permissions. No admin by default. |
| Command wrappers and policy | Replace raw shell calls with vetted APIs that block flags like /q or /f on critical paths. |
| Soft-delete first | Move to a Trash/Recycle mechanism with retention, not permanent delete. |
| Two-person rule for high-risk ops | Require a second user approval for actions beyond the project directory. |
| Rate limiting and canary checks | Act in small batches; stop if anomalies detected (e.g., too many files, unexpected directories). |
| Comprehensive logging and rollback | Immutable logs, clear audit trails, snapshots/backups, and straightforward restoration paths. |
| Secure defaults and least privilege | Start read-only; grant write on demand and only where needed. |
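To make the first two rows and “soft-delete first” concrete, here is a minimal Python sketch of a deletion wrapper. The trash location and function shape are assumptions for illustration, not a production design:

```python
import shutil
import time
from pathlib import Path

TRASH = Path.home() / ".agent-trash"  # holding area with retention, not a hard delete

def remove_tree(target: Path, *, confirmed: bool = False) -> None:
    target = target.resolve()
    files = [p for p in target.rglob("*") if p.is_file()]

    # Dry-run by default: show the blast radius, then stop.
    print(f"Would move {len(files)} files under {target} to {TRASH}")
    if not confirmed:
        print("Dry run only; pass confirmed=True to proceed.")
        return

    # Soft-delete: move the tree into a timestamped trash folder instead.
    TRASH.mkdir(parents=True, exist_ok=True)
    destination = TRASH / f"{target.name}-{int(time.time())}"
    shutil.move(str(target), str(destination))
    print(f"Moved {target} to {destination}; purge only after the retention window.")
```

In a real agent, the confirmation would be an interactive human-in-the-loop prompt rather than a keyword argument, and the trash directory would be purged on a schedule.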
Windows-specific considerations for delete operations
- Prefer APIs that send files to the Recycle Bin rather than rmdir /q or del /f equivalents (see the sketch after this list).
- Block deletes at drive roots (e.g., D:) unless an explicit, multi-step override is provided and logged.
- Use service accounts without admin rights; lock down ACLs so agents cannot touch home or shared drives.
- Test commands in a sandboxed VM with synthetic data before enabling on developer machines.
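For the first two bullets, one practical option in Python is the third-party Send2Trash package (a real library, though its use here is an assumption rather than anything Antigravity does), combined with a hard refusal of drive roots:

```python
# Requires: pip install Send2Trash
from pathlib import Path
from send2trash import send2trash

def recycle(path: str) -> None:
    p = Path(path).resolve()
    if p == Path(p.anchor):  # a bare drive root like D:\ resolves to its own anchor
        raise PermissionError(f"Refusing to recycle a drive root: {p}")
    send2trash(str(p))  # recoverable from the Recycle Bin, unlike rmdir /s /q
```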
Developer and team checklist
- Scope: Define per-project working directories and explicit no-go zones.
- Permissions: Run agents with the minimum needed rights; disable admin access.
- UX: Implement previews, confirmations, and clear summaries of intended actions.
- Policy: Maintain an allowlist of safe commands and block or wrap dangerous ones (see the sketch after this checklist).
- Recovery: Ensure backups/snapshots, test restores, and document incident steps.
- Audit: Log every agent action with parameters, timestamps, and user attribution.
- Governance: Map controls to UK GDPR and internal risk policies; review regularly.
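The policy item lends itself to a simple wrapper around command execution. A sketch follows; the allowlist and banned-token set are purely illustrative, not a recommended production policy:

```python
import shlex

ALLOWED_COMMANDS = {"git", "npm", "python"}          # illustrative allowlist
BANNED_TOKENS = {"rmdir", "rd", "del", "format", "/q", "/f", "-rf"}

def vet_command(command_line: str) -> list[str]:
    """Reject commands outside the allowlist before anything executes."""
    tokens = shlex.split(command_line)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Command not on the allowlist: {command_line!r}")
    banned = BANNED_TOKENS.intersection(t.lower() for t in tokens)
    if banned:
        raise PermissionError(f"Blocked dangerous tokens: {sorted(banned)}")
    return tokens  # safe to hand to subprocess.run(tokens)
```

Every rejection should also be logged with the full command line, which feeds the audit item above.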
What to do if something like this happens
- Stop using the affected drive immediately to avoid overwriting deleted data.
- Attempt recovery with reputable tools or consult a professional data recovery service.
- Preserve logs, commands, and agent configurations for root-cause analysis.
- Report to your internal incident team; consider regulatory and contractual obligations for significant data loss.
A balanced take
One reported incident doesn’t condemn all agentic tools, but it’s a sharp reminder that autonomy needs thoughtful guardrails. Teams in the UK should treat agentic IDEs like any other high-privilege automation: sandbox them, restrict them, and design for reversibility and consent.
If you’re experimenting with AI-powered automations, start small and scoped. For example, when integrating assistants with business tools, grant the least privileges possible and prefer read-only access until you trust the workflow. I’ve written about safe scoping in integrations here: How to connect ChatGPT and Google Sheets (Custom GPT).
Finally, vendors need to meet developers halfway: safe defaults, clear prompts, strong policy controls, and transparent logs. Users shouldn’t have to learn the hard way that “turbo mode” can become “nuclear mode”.