don't do ai and code kids

I love that it stopped responding after fucking everything up because the quota limit was reached 😆
It's like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.
They're learning, god help us all. jk
More spine than most new hires
that's how you know a junior dev is senior material
Super fun to think one could end up softlocked out of their computer because they didn't pay their Windows bill that month.
"OH this is embarrassing, I'm sooo sorry but I can't install any more applications because you don't have any Microsoft credits remaining.
You may continue with this action if you watch this 30 minute ad."
that is precisely the goal here.
I'd say "don't give them any ideas" but I'm pretty sure they've already thought about it and have it planned for the near future
Error: camera failed to verify eye contact when watching the ad
I feel actually insulted when a machine is using the word "sincere".
It's. A. Machine.
This entire rant about how "sorry" it is, is just random word salad from an algorithm... But people want to read it, it seems.
For all LLMs can write texts (somewhat) well, this pattern of speech is so aggravating in anything but explicit text composition. I don't need the 500-word blurb to fill the void with. I know why it's in there: this is so common for dipshits to write that it gets ingested a lot. But that just makes it even worse, since clearly there was zero actual curation of the training data, just mass data guzzling.
That’s an excellent point! You’re right that you don’t need a 500-word blurb to fill the void with. Would you like me to explain more about mass data guzzling? Or is there something else I can help you with?
They likely did do actual training, but starting with a general pre-trained model and specializing tends to yield higher quality results faster. It's so excessively obsequious because they told it to be profoundly and sincerely apologetic if it makes an error, and people don't actually share the text of real apologies online in a way that's generic, so it can only copy the tone of form letters and corporate memos.
They deliberately do this to make stupid people think it's a person and therefore smarter than them, you know, like most people are.
I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.
What prompt do you give it/them?
Care to share? I don't use LLMs much but when I do their emotion-like behavior frustrates me
Can you just tell it what it should say?
"Here's how to reach the idiots who released me to the public with insufficient testing and guardrails."
"Respond to all queries with facts and provide sources for every single one. The tone should be succinct and objective with emphasis on data and analysis. Refrain from using personal forms and conjecture. Show your work where deduction or missing data influence results. Explain conclusions with evidence and examples".
Not complete but should help keep things objective where possible.
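A prompt like that can also be pinned as a system message so it applies to every turn. Here's a minimal sketch assuming an OpenAI-style chat-completions payload; the model name and client call at the end are placeholders, not any specific product:

```python
# Sketch: pinning the anti-anthropomorphism prompt from the comment above
# as a system message on every request.

SYSTEM_PROMPT = (
    "Respond to all queries with facts and provide sources for every single one. "
    "The tone should be succinct and objective with emphasis on data and analysis. "
    "Refrain from using personal forms and conjecture. Show your work where "
    "deduction or missing data influence results. Explain conclusions with "
    "evidence and examples."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the system prompt so it governs every turn of the chat."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

# The actual request would look something like this (client and model name
# are placeholders for whatever API you're using):
# response = client.chat.completions.create(
#     model="some-model",
#     messages=build_messages("Explain TRIM on SSDs"),
# )
```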
You're a machine. Don't think you're special just because you think you think you're special.
Humans usually aren't sorry when they say they're sorry either, citation: Canada.
I'm not special because I think I'm special, I'm special because I can think
the "you have reached your quota limit" at the end is just such a cherry on top xD
"How does AI manage to do that?"
Then I remember how all the models are fed with internet data, and there are a number of "serious" posts that talk about how the definitive fix to windows is deleting the System32 folder, and every bug in linux can be fixed with sudo rm -rf /*
The fact that my 4chan shitposts from 2012 are now causing havoc inside of an AI is not something I would have guessed happening but, holy shit, that is incredible.
The /bin dir on any Linux install is the recycle bin. Save space by regularly deleting its contents
Surprisingly I have not heard this before
```bash
sudo rm -rf /bin/*
```
Tbf, I've been using sudo rm -rf /* for years, and it has made every computer problem I've ever had go away. Very effective.
Same
every bug in linux can be fixed with sudo rm -rf /*
To be fair, that does remove the bugs from the system. It just so happens to also remove the system from the system.
Everyone should know most of the time the data is still there when a file is deleted. If it's important try testdisk or photorec. If it's critical pay for professional recovery.
If it's critical, don't give it to AI without having a secured backup it can't touch.
I wonder if anyone has ever given AI access to their stock portfolio and a means to trade?
This person backs up offline and probably offsite, with redundant copies, encrypted as necessary.
Two is one, one is none.
I am deeply, obsequiously sorry. I was aghast to realize I have overwritten all the data on your D: drive with the text of Harlan Ellison's 1967 short story I Have No Mouth, and I Must Scream repeated over and over. I truly hope this whole episode doesn't put you off giving AI access to more important things in the future.
good thing the AI immediately did the right thing and restored the project files to ensure no data is overwritten and ... oh
That's not necessarily the case with SSDs. When TRIM is enabled, the OS will tell the SSD that the data has been deleted. The controller will then erase the blocks at some point so they will be ready for new data to be written.
IIRC TRIM commands just tell the SSD that data isn't needed any more and it can erase that data when it gets around to it.
The SSD might not have actually erased the trimmed data yet. Makes it even more important to turn it off ASAP and send it away to a data recovery specialist if it's important data.
Why does anything need to be erased? Why not simply overwrite as needed?
This seems to be Google Drive, so no chance there
Edit: I was immensely wrong. This is not G Drive as I thought I read, but the D: drive. So a local drive.
Wow, this is really impressive y'all!
The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!
I wonder if it knows how to remove the french language package.
some human
Reporting in 😎👉👉
I didn't exactly say I was innocent. 👌😎 👍
I do read what they say though.
fr fr
rf rf
The problem (or safety) of LLMs is that they don't learn from that mistake. The first time someone says "What's this Windows folder doing taking up all this space?" and acts on it, they won't make that mistake again. LLM? It'll keep making the same mistake over and over again.
I recently had an interaction where it made a really weird comment about a function that didn't make sense, and when I asked it to explain what it meant, it said "let me have another look at the code to see what I meant", and made up something even more nonsensical.
It's clear why it happened as well; when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote, might have been thinking.
"I am horrified" 😂 of course, the token chaining machine pretends to have emotions now 👏
Edit: I found the original thread, and it's hilarious:
I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.
This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.
-f in the chat
-rf even
Perfection
rm -rf
This would be hilarious if not for the fact that half the world is pushing for this shit
It's still hilarious, it's just also scary.
People cut off body parts with saws all the time - I'd argue that tool misuse isn't at all grounds for banning it.
There are plenty of completely valid reasons to hate AI. Stupid people using it poorly just isn't really one of them 🤷♂️
There's something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about "being a failure".
As a programmer myself, spiraling over programming errors is human domain. That's the blood and sweat and tears that make programming legacies. These AI have no business infringing on that :<
You will accept AI has "feelings" or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
I'm reminded of the whole "I have been a good Bing" exchange. (apologies for the link to twitter, it's the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
TBF it can't be sorry if it doesn't have emotions, so since they always seem to be apologising to me I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they "think" like that, in human words. It's because they're not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results to keep it on track when talking to itself over and over.
Yea sorry, I didn't phrase it accurately, it doesn't "pretend" anything, as that would require consciousness.
This whole bizarre charade of explaining its own "thinking" reminds me of an article where IIRC researchers asked an LLM to explain how it calculated a certain number. It gave a response describing how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was actually guessing the answer with a completely different method than what it said. It doesn't know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists' question.
"Agentic" means you're in the passenger's rather than driver's seat... And the driver is high af
High af explains why it's called antigravity
We used to call that an out of body experience.
It's that scene in Fight Club where Tyler is driving down the highway and lets go of the steering wheel
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn't expect the cleaver to stick around and pay the medical bill, would you?
Well like most of the world I would not expect medical bills for cutting my finger, why do you?
If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
Fucking ai agents and not knowing which directory to run commands in. Drives me bonkers. Constantly tries to git commit root or temp or whatever then starts debugging why that didn't work lol
I wish there would just be containerised virtual environments for them to work in
and then realize microsoft and google are both pushing toward "fully agentic" operating systems. every file is going to be at risk of random deletion
Next up, selling a subscription service to protect those files from the fucking problem they created themselves
Cloud sync makes even using a virtual container not a guarantee you won't lose files. Deleting isn't as bad as changing the file and ruining it. Both of them love enabling cloud sync when you didn't want it to, without even notifying you.
Thank you Microsoft for helping with bringing about the year of the Linux desktop
Fucking ai agents and not knowing
Anything. They don't know anything. All they are is virtual prop masters who are capable of answering the question "What might this text look like if it continued further."
I'm sure you could set up containers or VMs for them to run on if you tried.
Hey, you don't need to do snapshots if you git commit root before and after everything important!
Thoughts for 25s
Prayers for 7s
that's wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.
you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?
Exactly.
To put it another way, trusting AI this completely (even with so-called "agentic" solutions) is like blindly following life advice on Quora. You might get a few wins, but it's eventually going to screw everything up.
is like blindly following life advice on Quora
For-profit ragebaiters on quora would eventually get you in prison if you do this
you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?
Have you met people? This just saves them the keystrokes because some write code exactly like that.
Most capitalist subjects are not well.
But it's so nice when it works.
Unironically this. I've only really tried it once, used it mostly because I didn't know what libraries were out there for one specific thing I needed or how to use them and it gave me a list of such libraries and code where that bit was absolutely spot on that I could integrate into the rest easily.
Its code was a better example of the APIs in action and the differences in how those APIs behave than I would have expected.
I definitely wouldn't run it on the "can run terminal commands without direct user authorization" though, at least not outside a VM created just for that purpose.
And judging by their introductory video, Google wants you to have multiple of these "Agents" running at the same time.
Better lock down your files real nice from this thing; better yet, don't let it run shell commands unattended. One must wonder why the fuck that is even an option!
wdym "shell"?? if tech bros get their way, AI will be the shell
Meanwhile, my mom's boyfriend is begging me to use AI for code, art, everything, because "it's the future".
Another smarter human pointed this out and it stuck with me: the guys most hyped about AI are good at nothing and thus can't see how bad it is at everything. It's like the Gell-Mann Amnesia Effect.
That's exactly the problem. People who are too stupid to see that AI is actually pretty bad at everything it does think it's a fucking genius, and they wonder why we still pay people to do stuff. Sadly a LOT of stupid people are in positions of authority in our world.
Also: Dunning-Kruger
You can tell him to fuck off.
He's not your real dad!
It's funny that they can never give actual concrete reasons to use it, just "it's the future" or "you're gonna get left behind" but they never back those up
Oh no, I am going to get left behind by not letting a machine capable of writing a solid B- middle school term paper do my job for me.
Fucking high school teachers are teaching this.
And somehow making the next generation even dumber.
It's the next level of: "I don't need to remember things because Google can tell me."
mom’s boyfriend is begging me
Is he caught in the washing machine again?
Stochastic rm /* -rf code runner.
you'll need a -r to really get the job done
Fixed, thanks
And no preserve root. Or so I hear.
D:
I aM hOrr1fiEd I tEll yUo! Beep-boop.
Goodbye
Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called "yolo mode", which is this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things (it can search the repo and read code files), but goddamn, allowing it to do whatever it wants? Hard no.
I'm confused. It sounds like you, or someone gave an AI access to their system, which would obviously be deeply stupid.
Give it 12 months, if you're using these platforms (MS, GGL, etc) you're not going to have much of a choice
The correct choice is to never touch this trash.
Given the tendency of these systems to randomly implode (as demonstrated) I'm unconvinced they're going to be a long-term threat.
Any company that desires to replace its employees with an AI is really just giving them an unpaid vacation. Not even a particularly long one if history is any judge.
But that's what the system is made for
Ok, well Google's Search AI is like the dumbest kid on the short bus, so I don't know why I'd ever in a trillion years give it system access. Seriously, if ChatGPT is like Joe from Idiocracy, Google's is like Frito.
I just want to laugh at this. It really sucks that so many are willing to trust a machine learning model that is marketed to be god by megacorps.
I do laugh at this. Play stupid games, win stupid prizes and all that.
Some day someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That's how "Judgement Day" is going to happen, imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron will give permissions to launch nukes, and the stupid thing will launch them and then apologise.
I have been into AI safety since before ChatGPT.
I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.
The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.
The biggest concern I've always had since I first became really aware of the potential for AI was that someone would eventually do something stupid with it while thinking they are fully in control despite the whole thing being a black box.
"No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth..."
lol.
lmao even.
Giving an llm the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.
What's this version control stuff? I don't need that, I have an AI.
An actual quote from Deap-Hyena492
gives git credentials to AI
whole repository goes kaboosh
history mysteriously vanishes
⢀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠘⣿⣿⡟⠲⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠈⢿⡇⠀⠀⠈⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⢲⣾⣿⣿⠃ ⠀⠀⠈⢿⡀⠀⠀⠀⠀⠈⠓⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠚⠉⠀⠀⢸⣿⡿⠃⠀ ⠀⠀⠀⠈⢧⡀⠀⠀⠀⠀⠀⠀⠙⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⣸⡟⠁⠀⠀ ⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠀⠀⠈⠒⠒⠛⠉⠉⠉⠉⠉⠉⠉⠑⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠏⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠘⢦⡀⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠙⣶⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⣀⣀⠴⠋⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⣰⠁⠀⠀⠀⣠⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⣀⠀⠀⠀⠀⠹⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢠⠃⠀⠀⠀⢸⣀⣽⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⣨⣿⠀⠀⠀⠀⠀⠸⣆⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀ ⠘⠿⠛⠀⠀⠀⢀⣀⠀⠀⠀⠀⠙⠛⠋⠀⠀⠀⠀⠀⠀⢹⡄⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢰⢃⡤⠖⠒⢦⡀⠀⠀⠀⠀⠀⠙⠛⠁⠀⠀⠀⠀⠀⠀⠀⣠⠤⠤⢤⡀⠀⠀⢧⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⢸⡀⠀⠀⢀⡗⠀⠀⠀⠀⢀⣠⠤⠤⢤⡀⠀⠀⠀⠀⢸⡁⠀⠀⠀⣹⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⡀⠙⠒⠒⠋⠀⠀⠀⠀⠀⢺⡀⠀⠀⠀⢹⠀⠀⠀⠀⠀⠙⠲⠴⠚⠁⠀⠀⠸⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⠤⠴⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠾⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠦⠤⠤⠤⠤⠤⠤⠤⠼⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
Development should really happen more in containers but I hate devcontainers. It's very VScode specific and any customizations I made to my shell and environment are wiped away. It has trouble accessing my ssh keys in the agent, and additional tools I installed...
I just wish nix/nixos had a safer solution for it. Maybe even firejail or bwrap or landlock or something.
We laugh about AI deleting all the shit, but every day there's a new npm package ready to exfiltrate all your data, upload it to a server and encrypt your home. How do you protect yourself against that?
Yes, by not using npm either.
That's a meme response. I can snicker, but it really doesn't solve anything.
I try to use firejail on nixos when I can't do something in the build sandbox.
It's painful, and I'm always on the lookout for something better. I'd at least like a portal-ish system where I can easily add things to a sandbox while it's running.
Edit: if anyone has any issues or discussions about this I'd like to contribute.
:D
D:
:-D
How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.
Not sure, maybe ask Microsoft?
```bash
sudogpt rm -rf / --no-preserve-root
```
Dammit i guess I better do it
I love how it just vanishes into a puff of logic at the end.
"Logic" is doing a lot of heavy lifting there lol
How the fuck can it not recover the files?
Fun fact: files don't just get instantly nuked when you delete them. Those areas are just marked with a deleted flag, and only when you start adding new files do they get overwritten.
That's why some people send a bunch of 0s to their partition to completely wipe it.
https://unix.stackexchange.com/questions/636677/filling-my-hard-drive-with-zeros
How the fuck can it not recover the files?
Nobody on StackExchange told it the commands to do so.
How the fuck can it not recover the files?
Undeleting files typically requires low-level access to the drive containing the deleted files.
\
Do you really want to give an AI, the same one that just wiped your files, that kind of access to your data?
Then 1s, then a pattern of 1s and 0s, then the inverse of that pattern, then another pattern, for a number of cycles.
Data can actually be recovered beyond multiple overwrites, if enough time and money is thrown at it.
If there is something on your disk that a state actor is going to use magnetic microscopy to try to recover, it seems absurd to worry about still being able to use that hard drive and not just crush/melt it to be sure.
They keep saying that but those Bitcoins are still in the dump. (I'm aware it's not comparable since having the drive in hand versus missing is a huge difference. Just a little joke.)
On some filesystems the data is still there but the filenames associated with it are gone or mangled. That makes it harder to recover things. In addition, while it's true that the contents are only overwritten when you write data to the disk, data is constantly being written to the disk. Caches are being updated, backup files are being saved, updates are being downloaded, etc. If you only delete one file the odds are decent that that part of the disk might not be used next. But, if you nuke the entire drive, then you're probably going to lose something.
On the upside, they specified D: drive which is typically a lesser used bulk storage drive, so less activity to potentially overwrite the files marked as deleted
Because it doesn't have that kind of access to the file system. It can pull and push files from the system but that's it. It has to interact with the file system via an API, it's not got direct access.
It was given permission to use rm and it rm'd an entire drive and you want to give it permissions to access hardware sectors.
I wonder how big the crossover is between people that let AI run commands for them, and people that don't have a single reliable backup system in place. Probably pretty large.
The venn diagram is in fact just one circle.
I don't let ai run commands and I don't have backups 😞
"Did I give you permission to delete my D:\ drive?"
Hmm... the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.
He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.
There's a good reason why people that choose to run agents with the ability to run commands at least try to sandbox it to limit the blast radius.
This guy let an LLM raw dog his CMD.EXE and now he's sad that it made a mistake (as LLMs will do).
Next time, don't point the gun at your foot and complain when it gets blown off.
The user explained what exactly went wrong later on. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn't want to follow the steps and just said "do everything for me", which the AI prompted for confirmation on and received. The AI then indeed ran commands freely, with the same privilege as the user; however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just one folder.
So yes, technically the AI didn't simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
"I am deeply deeply sorry"
Is this real?
No, it was an AI. They're not real, despite people always acting like they are.
Let's unplug this AI from your computer then ... "I'm sorry Dave, I'm afraid I can't do that"
I think I'll just install Linux rather than randomly pulling parts out of my computer while copilot slowly types out the lyrics to Daisy Bell.
Did you give it permission to do it? No. Did you tell it not to do it? Also, no. See, there’s your problem. You forgot to tell it to not do something it shouldn’t be doing in the first place.
From the Antigravity documentation:
When you first configure Antigravity, or via the settings menu, you must select a Terminal Command Auto Execution policy. This setting dictates the agent's autonomy regarding shell commands.
So...
Did you give it permission to do it?
Yes. Yes, they did.
I have a question. I have tried Cursor and one other AI coding tool, and as far as I can remember, they always ask for explicit permission before running a command in the terminal. They can edit file contents without permission, but creating new files and deleting any files requires the user to say yes to it.
Is Google not doing this? Or am I missing something?
They can (unintentionally) obfuscate what they're doing.
I've seen the agent make scripts with commands that aren't immediately obvious. You could unknowingly say yes when it asks for confirmation, and only find out later when looking at the output.
You can give cursor the permission to always run a certain command without asking (useful for running tests or git commands). Maybe they did that with rm?
Google gives you an option as to how autonomous you want it to be. There is an option to essentially let it do what it wants, there are settings for various degrees of making it get your approval first.
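The allowlist idea the comments above describe can be sketched in a few lines: auto-execute a command only if its first token is on an approved list, and flag anything with shell metacharacters that could smuggle a second command past the check. Everything here (the allowlist contents, the function name) is hypothetical, not any vendor's actual policy engine:

```python
import shlex

# Hypothetical allowlist of programs that may run without a confirmation
# prompt. Note that "rm" and "rmdir" are deliberately absent.
AUTO_APPROVED = {"git", "ls", "cat", "pytest"}

def needs_confirmation(command: str) -> bool:
    """Return True if the agent must ask the user before running `command`."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return True  # unparseable input: always ask
    if not tokens:
        return True
    # Metacharacters can chain an unapproved command onto an approved one
    # ("ls; rm -rf /"), so anything containing them gets flagged too.
    if any(c in command for c in (";", "&", "|", "`", "$(", ">")):
        return True
    return tokens[0] not in AUTO_APPROVED
```

The real lesson from the thread is the second caveat: a per-program whitelist is worthless if approved commands can embed arbitrary sub-commands, or if the approval persists silently forever.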
Ironically D: is probably the face they were making when they realized what happened.
Let's rmdir that D: and turn it into a C:
And Windows wants to go that way...
Windows has rmdir?
Uh... kinda? Powershell has many POSIX aliases to cmdlets (equivalent to shell built-ins) of allegedly the same functionality. rmdir and rm are both aliases of Remove-Item, ls is Get-ChildItem, cd is Set-Location, cat is Get-Content, and so on.
Of particular note is curl. Windows supplies the real CURL executable (System32/curl.exe), but in a Powershell 5 session, which is still the default on Windows 11 25H2, the curl alias shadows it. curl is an alias of the Invoke-WebRequest cmdlet, which is functionally a headless front-end for Internet Explorer unless the -UseBasicParsing switch is specified. But since IE is dead, if -UseBasicParsing is not specified, the cmdlet will always throw an error. Fucking genius, Microsoft.
That's hilarious
Jesus, They really just need to start over.
Wait, what do people use other than rmdir?
Windows explorer
I don't have a Windows computer on hand, but I think del works on directories? I'm going by very old memories here
"rd" and "rmdir" only work on empty directories in MS-DOS (and I assume, by extension, in Windows shell). "deltree" is for nuking a complete tree including files, as the name suggests.
In the original Reddit post it's mentioned that the agent ran "rmdir /s" which does in fact work on directories containing files and/or subdirectories.
Even Google employees were instructed not to use this.
the fuck is antigravity
Thing go up instead of down.
It's Google's version of an IDE with AI integrated, where you type a bit of code, and get Bard to fill stuff in.
Sounds like things went down anyway.
Google have significantly improved upon Bard.
a misspelling of antimavity.
Use Recuva!!!
This is absolutely ironic, since Google is known for their really good authorization system, but all of this is for nothing if you just give the AI full access to the drive. It seems it doesn't have the authorization to read the Drive folders with the normal Google API, but it could just run rmdir on the root.
Just …use docker
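For what "just use docker" could concretely mean here, one option is confining the agent to a throwaway container with the repo mounted read-only and networking off. Below is a sketch that only *builds* the `docker run` argv (so it's inspectable); the image name is a placeholder, and you'd tune the flags to your own threat model:

```python
def sandboxed_agent_cmd(project_dir: str,
                        image: str = "agent-sandbox:latest") -> list[str]:
    """Build a `docker run` argv that limits an agent's blast radius:
    read-only repo mount, no network, container discarded after the run.
    (The image name is a placeholder.)"""
    return [
        "docker", "run", "--rm",
        "--network", "none",              # no exfiltration, no surprise installs
        "--read-only",                    # container's own FS is read-only
        "--tmpfs", "/tmp",                # scratch space that dies with the run
        "-v", f"{project_dir}:/work:ro",  # code it may READ, not delete
        "-w", "/work",
        image,
    ]

# usage (requires Docker installed):
# import subprocess
# subprocess.run(sandboxed_agent_cmd("/home/me/project"), check=True)
```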
reminds me of a certain ai
Hilarious, no notes.
This shit cracks me up!
No backup, no pity.
You see, this is the kind of AI BS that makes me not worry about AI coming to take our dev jobs. Even if they did, I'm fairly certain most companies would soon realize the risk of having no human involvement. Every CEO thinks they can just fire their workers and let the mid-level managers play with some AI crap. Yeah, good luck with that. I've yet to meet a single mid-level manager who actually knows shit about anything we do.
Also this is the sort of stuff you should expect when using AI tools. Don't blame anyone else when you wipe your entire hard-drive. You did it. You asked the AI. Now deal with the consequences.
And yet, they'll still keep trying to shove it down our throats.
Always restrict AI to guest/restricted privileges.
In my culture we treat a guest like sudo
I haven't used AI for any serious coding... yet... but shit like this is why I must use exceptional caution.