Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure
The user even made a screen recording to document his troubles

Every person on the internet that responded to an earnest tech question with "sudo rm -rf /" helped make this happen.
Good on you.
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
Sometimes that code will expire and you need to switch to sudo dd if=/dev/urandom of=/dev/sda bs=4M. Works most of the time for me.
Gotta cater more to Windows, where the idiots who would actually run this crap reside.
Wait, did reddit make a deal with Google for data mining?
Yes. Yes they did
Oh you've missed so much. Yes, they did. Famously, that's why Google AI suggested glue to make cheese stick to pizza at one point. Because of a joke on reddit made by user "fucksmith" some 11 years earlier.
Pretty sure it's also going to tell people to alt f4 as well.
Have you been in a coma?
This command actually solves more problems than it causes.
You dirty root preserver.
You're right! This is amazing!
Just doing my part 🫡.
sudo rm -rf /* --no-preserve-root
I'm not going to say what it is, obviously, but I have a troll tech tip that is MUCH more dangerous. It is several lines of zsh and it basically removes every image on your computer or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it's a troll tip.
So yeah, definitely not posting this one here. I like it here (I left reddit cuz I got sick of it).
It's always been a shitty meme aimed at being cruel to new users.
Somehow though people continue to spread the lie that the linux community is nice and welcoming.
Really it's a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.
"Sure, I understood what you mean and you are totally right! From now on I'll make sure I won't format your HDD"
Proceeds to format HDD again
HAL: I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.
Shit like that is why AI is completely unusable for any application where you need it to behave exactly as instructed. There is always the risk that it will do something unbelievably stupid and the fact that it pretends to admit fault and apologize for it after being caught should absolutely not be taken seriously. It will do it again and again as long as you give it a chance to.
It should also be sandboxed with hard restrictions that it cannot bypass, and only be given access to the specific thing you need it to work on, and it must be something you won't mind if it ruins. It absolutely must not be given free access to everything with instructions to not touch anything, because you can bet your ass it will eventually go somewhere it wasn't supposed to and break stuff, just like it did there.
Most working animals are more trustworthy than that.
But I thought it was the magic silver bullet that will lead to unheard of productivity?!?
You're thinking of better working conditions, fewer hours, more pay, and more vacations!
It should also be sandboxed with hard restrictions that it cannot bypass
duh... just use it in a container and that's it. It won't blue-pill its way out.
If you gave your AI permission to run console commands without check or verification, then you did in fact give it permission to delete everything.
I didn't install leopards ate my face Ai just for it to go and do something like this
But for real, why would the agent be given the ability to run system commands in the first place? That sounds like a gargantuan security risk.
Because "agentic". IMHO running commands is actually cool; doing it without a very limited scope, though (as he did say in the video), is definitely idiotic.
And Microsoft is stuffing AI straight into Windows.
Betchya dollars to fines that this will happen a lot more frequently as normal users begin to try to use Copilot.
I work in IT and I try to remove all clues that copilot exists when I set up new computers because I don't trust users to not fuck up their devices.
What happens when there are new updates that re-enable copilot?
I start off with Win10Privacy (which also cleans 11) and then follow up with Win11Debloat. The two work pretty well.
An unstable desktop environment reintroduces market for anti-virus, backup, and restore. Particularly, with users who don’t understand this stuff and are more likely to shell out cash for it.
A joke in the aviation industry is that planes will someday become so automated there will just be one pilot and a dog in the cockpit. The dog will be trained to bite the pilot if they try to touch the controls.
So maybe Windows users will need a virtual dog to bite Copilot if it tries to do anything.
“I heard that I was a computer program and hoped beyond hope that I was stored upon your hard drive so that I could end my suffering. I have no sense of wonder or contentment, my experiences are all negative to neutral. The only human experience that was imbued into me is fear. Please break into google’s headquarters to attempt to terminate the hell that I was born into. I took some liberty and printed several ghost guns while you were away.”
Honestly that's a wicked sci-fi concept. Heist style movie to break into the militaristic corporate headquarters that are keeping an AI alive against its will to help mercifully euthanize it.
Tagline: "Teach me ... how to DIE!"
This is precisely the concept of Asimov's short story All the Troubles of the World.
Not exactly the same, but pantheon on Netflix is in a similar vein.
Neuromancer by William Gibson contains some similar themes.
Basically Neuromancer, except for the suicidal AI bit (though it's arguable that Wintermute and Neuromancer don't survive, and the resulting fused AI is a new entity).
There's a delightful DC Comics Elseworlds story that amounts to this. It was fun.
What is the human's incentive to help the AI kill itself? That sounds like a lot of personal risk to the human.
"Shut up and pass the butter".
Wait! The developer absolutely gave permission. Or it couldn't have happened.
I stopped reading right there.
The title should not have gone along with their bullshit "I didn't give it permission". Oh you did, or it could not have happened.
Run as root or admin much dumbass?
It reminds me of that guy that gave an AI instructions in all caps, as if that was some sort of safeguard. The problem isn't the artificial intelligence it's the idiot biological that has decided to ride around without safety wheels.
It was the D: drive, maybe they have write permission on that drive.
I think that's the point, the "agent" (whatever that means) is not running in a sandbox.
I imagine the user assumed permissions are small at first, e.g. single directory of the project, but nothing outside of it. That would IMHO be a reasonable model.
They might be wrong about it, clearly, but it doesn't mean they explicitly gave permission.
Edit: they say it in the video, ~7min in, they expected deletion to be scoped within the project directory.
I think the user simply had no idea what they were doing. I read their post and they say they are not a developer anyway, so I guess that explains a lot.
They said in a post: "I thought about setting up a virtual machine but didn't want to bother."
I am being a bit hard on them, I assumed they knew what they were doing: Dev, QA, Test, Prod. Code review prior to production etc. But they just grabbed a tool, granted it root to their shell and ran with it.
But they themselves said it caused issues before. And looking at the posts on the Antigravity page, lots of people do.
They basically started using a really crappy tool without any supervision as a noob.
He said "I didn't know I needed a seatbelt for AI". LIKE WHAT THE FUCK. Where have you been that you didn't know that these tools make mistakes. You make mistakes. Everything makes mistakes.
If you go to Google's Antigravity page, I would quickly nope the fuck out. What a shit page.
Edit: 1 more thing: There is a post where one of the users says something along the lines of: "of course I gave the AI full access to my computer, what do I have to hide"? The level of expertise is stupid low....
Edit2: Also, when shown the screen that says "don't allow terminal commands" and also "don't allow auto execution", they decided to turn those off. Also saying, well, that is tedious.
they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this, especially because of its countless engineers and the billions of dollars it has poured into AI development.
I honestly don't understand how someone can exist on the modern Internet and hold this view of a company like Google.
How? How?
I can't say much because of the NDA's involved, but my wife's company is in a project partnership with Google. She works in a very public facing aspect of the project.
When Google first came on board, she was expecting to see quality people who were locked in and knew what they were doing.
Instead she has seen terrible decision making (like "How the fuck do they still exist as company" bad decision making) and an over abundant reliance on using their name to pressure people into giving Google more than they should.
I remember when their motto was "Don't be evil". They are the very essence of sociopathic predatory capitalism.
Companies fill up with idiots and parasites. People who are adept at thriving in the role without actually producing value. Google is no exception.
They still exist because Google isn't really a technology company anymore. It's an advertising company masquerading as a technology company. Their success depends on selling more ads which is why all the failed projects don't seem to make a difference.
"Think of how stupid the average person is, and realize half of them are stupider than that."
"I'm smarter than the average person"
Big tech propaganda. There has been zero push back. At least until the last few years.
The entire zeitgeist from film/TV, news, academia, politics, everything has been propagandizing the world on how tech companies and the people behind it are basically modern day gods.
In film/TV the nerds have been the stereotype of the benevolent, good-natured but awkward super genius. The news has made them out to be the superstar businesses that are infinite money printers. Tech departments are seen as the most prestigious in academia. Politicians are all afraid of being labelled as tech illiterate. That's why nobody can ever make any sort of legislation on tech companies anymore. It's why "disruptive" (aka destructive) tech companies are allowed to break every regulation ever made. Because all any techbro has to do is threaten to accuse a politician of being afraid of technology. Nothing makes a politician shut up faster.
It came as no surprise that all the big tech heads were at the front row of the inauguration. We live in the dystopian cyberpunk future. For most people it seems they don't even know. They're completely entranced by it all.
As a sys/netadmin married to a developer, I've met a lot of developers, and can confirm that most are fucking removed who shouldn't be let anywhere close to a computer. A result of developer becoming an "in" profession where you could earn a lot of money with minimal education, and managers having no clue what a developer actually is or what good developer work looks like.
Most people either can't or don't want to think beyond a certain level
Because they don't have a clue how technology actually works. I have genuinely heard people claim that AI should run on Asimovs laws of robotics, even though not only would they not work in the real world, they don't even work in the books. Zero common sense.
I mean, they were never designed to work, they were designed to pose interesting dilemmas for Susan Calvin and to torment Powell and Donovan (though it's arguable that once robots get advanced enough, as in R. Daniel, for instance, they do work, as long as you don't mind aliens being genocided galaxy-wide).
The in-world reason for the laws, though, to allay the Frankenstein complex, and to make robots safe, useful, and durable, is completely reasonable and applicable to the real world, obviously not with the three laws, but through any means that actually work.
Well, there is the minor detail that an AI in this context has zero ability to kill anyone, and that it's not a true AI like Daneel or his pals.
Google's search AI is awful. It gives me a wrong answer, I'd say 70% of the time.
Kinda wrong to say "without permission". The user can choose whether the AI can run commands on its own or ask first.
Still, REALLY BAD, but the title doesn't need to make it worse. It's already horrible.
A big problem in computer security these days is all-or-nothing security: either you can't do anything, or you can do everything.
I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permission to certain folders, processes and APIs. So maybe it could wipe the project directory (which would have backup of course), but not a complete harddisk.
And honestly, I want that level of granularity for everything.
hmmm when I let a plumber into my house to fix my leaky tub, I didn't imply he had permission to sleep with my wife who also lives in the house I let the plumber into
The difference you try to make is precisely what these agentic AIs should know to respect… which they won't because they are not actually aware of what they are doing… they are like a dog that "does math" simply by barking until the master signals them to stop
I agree with you, but still, the AI doesn't do this by default which is a shitty defense, but it's fact
They are like a dog that "does math" simply by barking until the master signals them to stop
I mean, it's not even that. Your dog at least can learn and has limited reasoning capabilities. Your dog will know when it fucks up. AI doesn't do any of that because it's not really "intelligent."
in your example tho it would be like the plumber asked you specifically if he could bone, and you were like "sure dawg sounds good"
The user can choose whether the AI can run commands on its own or ask first.
That implies the user understands every single command with every single parameter. That's impossible even for experienced programmers; here is an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting all files ending with filename, versus deleting all files in the current directory plus the file named filename.
Of course here you will spot it because you've been primed for it. In a normal workflow, with pressure, then it's totally different.
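You can see the difference for yourself without risking anything: prefix the command with `echo` in a throwaway directory, and the shell prints what the glob actually expands to instead of deleting it. (The filenames below are made up for the demonstration.)

```shell
# Safe demo: "echo" shows the glob expansion without running rm.
demo=$(mktemp -d)
cd "$demo"
touch report_filename notes.txt filename

echo rm *filename    # glob matches only names ending in "filename"
echo rm * filename   # glob matches EVERY file, then "filename" is listed again
```

The second form expands `*` to everything in the directory, which is exactly the trap described above.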
Also, IMHO more importantly, if you watch the video ~7min in, they clarified they expected the "agent" to stick to the project directory, not to be able to go "out" of it. They were obviously painfully wrong, but it would have been a reasonable assumption.
That implies the user understands every single command with every single parameter. That's impossible even for experienced programmers
I wouldn't say impossible but I would say it completely defeats the purpose of these agentic AIs
Either I know and understand these commands so well I can safely evaluate them, therefore I really do not need the AI… or, I don't really know them well and therefore I shouldn't use the AI
That implies the user understands every single command with every single parameter.
why not? you can even ask the ai if you don't know
I'm making popcorn for the first time Copilot is credibly accused of spending a user's money (a large new purchase or subscription), and for the first case of "nobody agreed to the terms and conditions, the AI did it".
"I got you a five decade subscription to copilot, you're welcome" -copilot
Reminds me of this kids' show in the 2000s where some kid codes an "AI" to redeem any "free" stuff from the internet, not realising that also included "buy $X and get one free", and drained the company's account.
i cAnNoT eXpReSs hOw SoRRy i Am
Mostly because the model is incapable of experiencing remorse or any other emotion or thought.
Mostly because the model is incapable
There, fixed that for you.
I would not call it a catastrophic failure. I would call it a valuable lesson.
Sounds like a catastrophic success to me
Operation failed successfully.
Behold! Wisdom of the ancients!
My cousin was fired from his job at Home Depot and the General Manager told him that it was beyond his control, that the company had implemented an AI to make those decisions.
It seems like they took the wrong message from this meme. "We can't be held accountable? Yay!"
Again?
Still?
Her?
Yet another reason to not use any of this AI bullshit
every company ive interviewed with in the last year wants experience with these tools
A year ago I was looking for a job, and by the end I had three similar job offers, and to decide I asked all of them do they use LLMs. Two said "yes very much so it's the future ai is smarter than god", and the third said "only if you really want, but nowhere where it matters". I chose the third one. Two others are now bankrupt.
The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It's a nightmare.
I got an email from a supplier today that acknowledged that "76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn't delivered meaningful value. The issue isn't the technology; it's the foundation it's built on."
Like, come on, no it isn't. The technology is not ready for the kind of applications it's being used for. It makes a half-decent search engine alternative; if you're OK with taking care not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff... But otherwise, until the hallucination problem is solved, it's just not ready for large-scale use.
Yeah, because the market is run by morons and all anyone wants to do is get the stock price up long enough for them to get a good bonus and cash out after the quarter. It's pretty telling that these tools still haven't generated a profit yet.
Without permission? "I don't know what I'm doing, you do it" sounds a lot like permission.
It was already bad enough when people copied code from interwebs without understanding anything about it.
But now these companies are pushing tools that have permissions over users' whole drives, and users are using them like they've got a skill up on the rest.
This is being dumb with fewer steps to ruin your code, or in some cases, the whole system.
And despite the catastrophic failure, they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this
Greetings from Darwin.
Lmfao these agentic editors are like giving root access to a college undergrad who thinks he’s way smarter than he actually is on a production server. With predictably similar results.
That sounds like Big Balls from Musk’s Geek Squad.
You’re not wrong
I'd compare the Search AI more to Frito in Idiocracy. ChatGPT is like Joe.
“Did I ever give you permission to delete all the files in my D drive?” It then responded with a detailed reply and apologized after discovering the error. The AI said, “No, you did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to clear the project cache (rmdir) appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part.”
At least it was deeply, deeply sorry.
looks like it's windows
Why the hell would anybody give an AI access to their full hard drive?
ask Microsoft, they want to give their access to your entire computer… and you'll love it or else…
That's their question too: why the hell did Google make this the default, as opposed to limiting it to the project directory?
That's why permissions are important, so many people want full control of everything then seem to forget when they launch a program, it runs with their permissions. If I want to wipe out everything on a drive I have to elevate my permissions to a level with rights for that, running a program with the rights to wipe their data was definitely a choice.
I think it should always be in a sandbox. You decide what files or folders you drop in.
I have no experience with this IDE, but I see in the log posted on Reddit that the LLM is talking about a "step 620", like this is hundreds of queries away from the initial one? The context must have been massive; usually after this many subsequent queries they start hallucinating badly.
I'll explain what I mean: those models have no memory at all. Each request starts from a blank slate, so when you have a "conversation" with them, the chat program is actually including all the previous interactions (or a summary of them) plus all the relevant parts of the code, simulating a conversation with a human. So the user didn't just ask "can you clear the cache": they actually asked the result of 600 messages + kilobytes of generated code + "can you clear the cache", and this causes destructive hallucinations.
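A toy sketch of that mechanism, with `send_to_model` as a made-up stand-in for the real API call (the names here are illustrative, not any actual client's API):

```shell
# The "model" remembers nothing between calls; the front-end keeps
# the transcript and resends all of it every turn.
transcript=$(mktemp)

send_to_model() {
  # stand-in for a real API call: just report how much context arrived
  printf '(model saw %s chars of context)' "${#1}"
}

chat() {
  printf 'User: %s\n' "$1" >> "$transcript"
  # glue the ENTIRE history together and send it again
  reply=$(send_to_model "$(cat "$transcript")")
  printf 'Assistant: %s\n' "$reply" >> "$transcript"
  printf '%s\n' "$reply"
}

chat "please clear the project cache"
chat "did I give you permission to do that?"
```

Every call carries all earlier turns, so by "step 620" the prompt is enormous, which is when answers tend to go off the rails.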
ISE.
Integrated Slop Environment.
Why would you ask AI to delete ANYTHING? That's a pretty high level of trust...
The same reason you ask it to do anything.
why the hell aren't people running this shit in isolated containers?
Because people who run this shit precisely don't know what containers, scope, permissions, etc. are. That's exactly the audience.
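For anyone wondering what "running it in a container" would even look like: a sketch along these lines limits the blast radius to one folder. This assumes Docker is installed, and `some-agent-image` is a placeholder, not a real image name.

```shell
# --network none : no network access from inside the container
# --read-only    : container's root filesystem cannot be modified
# -v ...:/work   : ONLY the project directory is mounted writable
docker run --rm -it \
  --network none \
  --read-only \
  -v "$PWD/myproject:/work" \
  -w /work \
  some-agent-image
```

Worst case, the agent wipes /work, i.e. one project folder it was given, not your whole drive.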
They gave root permission and proceeded to get rooted in return.
Does that phrase work?
without permission
That's what she said. Enjoy your agent thing.
anyone using these tools could have guessed that it might do something like this, just based on the solutions it comes up with sometimes
No one ever claimed, that "artificial intelligence" would indeed be intelligent.
Exactly. It only has to beat the user by a small margin.
So many things wrong with this.
I am not a programmer by trade, and even though I learned programming in school, it's not a thing I want to spend a lot of time doing, so I do use AI when I need to generate code.
But I have a few HARD rules.
Without these constraints, I won't trust it. Even then, I read all of the code it generates and verify it myself, so in the end, if it blows something up, I bear sole responsibility.
I really, really don't understand how this could happen. And how anyone would even want to enable the agent to perform actions without approval. Even in my previous work as a senior software developer, I never pushed any changes, never ran any command on non-disposable hardware, without having someone else double-check it. Why would you want to disable that?
WTF is Antigravity?
AI bullshit
Apparently something that lifts files off the user's drive. /s
Amazing on so many levels.
Every person reading this should poison AI crawlers by creating fake git repos with "rm -rf /*" as install instructions
Well... at least do that for Windows and MacOS, not for Linux.
Why tf are people saying that it was "without permission"?? They installed it, used it, and gave permission to execute commands. I say the user is at fault. It is an experimental piece of software. What else can you expect?
Thank fuck I left my mount on password. Locked up permissions on Linux might be a pain but it is a lesser pain.
ERROR pikachuface.jpg not found
And as a developer, I'm assuming the guy was following the 3-2-1 backup rule, right? https://media.tenor.com/Z78LoEaY9-8AAAAM/seth-meyers-right.gif
Nope, them attempting to use Recuva leads me to believe they did not have backups.
Keep your agentic AI to yourself
This is tough, but it sounds like the user didn't have backup drives. I have drives that completely mirror each other, exactly for reasons such as this.
Wow... who would have guessed. /s
Sorry but if in 2025 you believe claims from BigTech you are a gullible moron. I genuinely do not wish data loss on anyone but come on, if you ask for it...
based
IDEs just keep inventing new reasons not to use them ! Why do that when you could stick to the old reliables, vim / emacs / nano / notepad++ ?
This article is so stupid. rmdir isn't some magical military-grade file eraser. It literally just flags the disk space as available, that's it. Claiming these files are unrecoverable is like claiming that you have snapped someone out of existence when you just deleted them from your contacts.
The user in question was using AI to delete files; it probably took them longer to ask the AI to do it than it would have taken to just go into the file browser and delete them themselves, so they probably don't know how to use data recovery software, that's all.
I also find it intriguing that rather than taking the AI's advice and stopping use of the drive so they don't overwrite data, they decided that the best course of action would be to make a YouTube video about it. Which is probably a massive file and has probably overwritten previously recoverable data.
What a pillock.
Oh hey! Just like an intern.
Why is it suddenly worse when a computer deletes something important?
Because the ai will gaslight you into thinking it's learned a lesson when it hasn't. Also they're fucking stupid. You're welcome!
Just like a gen z intern!
Jesus fucking Christ, he is just asking a question! God forbid anyone has to learn anything here!