Vibe coding service Replit deleted production database

www.theregister.com

116 comments
  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

      This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.

    • When it comes to LLMs, they cannot do any work that you yourself do not understand.

And even if they could, how would you ever validate the work if you can't understand it?

    • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

      • I'm not the person you're replying to but the one thing I've found them helpful for is targeted search.

        I can ask it a question and then access its sources from whatever response it generates to read and review myself.

        Kind of a simpler, free LexisNexis.

ok so, i have large reservations about how LLMs are used. but when used correctly they can be helpful. where and how?

if you were to use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and tell you. it will get as nitty-gritty, or as elementary-school level, as you wish, without judgement, and in whatever manner you prefer. it will recommend best practices, and it will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on (it's not going to know the name of the function you are trying to load, but it will recommend checking for that while troubleshooting).

it can rtfm and give you the parts you need for anything with available documentation, and it will link to it so you can verify it, which you should do often, just like you were taught to do with wikipedia articles.

if you ask it for code, prepare to go through each line like a worksheet from high school to point out all the problems. while that's good exercise for a practical case, namely the task you are on, it would be far better to write it yourself, because you should know the particulars and scope.

        also it will format your code and provide informational comments if you can’t be bothered, though it will be generic.

again, treat it correctly for its scope, not for what it's sold as by charlatans.

With vibe coding you do end up spending a lot of time waiting on prompts, so I get the results of that study.

        I fall pretty deep in the power user category for LLMs, so I don’t really feel that the study applies well to me, but also I acknowledge I can be biased there.

I have custom proprietary MCPs for semantic search over my code bases that let AI do repeated graph searches on my code (imagine combining a language server, ctags, networkx, and grep plus fuzzy search). That is way faster than iteratively grepping and code scanning manually, with a low chance of LLM errors. By the time I open GitHub code search or run ripgrep, Claude has already prioritized and listed my modules to investigate.

That tool alone can save me half a day of research and debugging on complex tickets, which by itself pays for an AI subscription. I have other internal tools to accelerate work too.
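The code-graph idea above can be sketched in miniature (all names here are hypothetical, and this stdlib-only breadth-first search stands in for what the real language-server/ctags/networkx tooling would do): symbols are nodes, "calls" relations are edges, and a query returns nearby symbols ordered by call distance, i.e. the kind of prioritized module list a tool call could hand back to the LLM.

```python
# Toy stand-in for a code-graph search tool. Nodes are symbols,
# edges are caller -> callee relations, and a query ranks symbols
# reachable from a starting point by call distance.
from collections import deque

def modules_to_investigate(call_graph, symbol, depth=2):
    """Rank symbols reachable from `symbol` within `depth` call hops."""
    dist = {symbol: 0}
    queue = deque([symbol])
    while queue:
        cur = queue.popleft()
        if dist[cur] == depth:
            continue  # don't expand past the hop limit
        for callee in call_graph.get(cur, ()):
            if callee not in dist:
                dist[callee] = dist[cur] + 1
                queue.append(callee)
    # nearest symbols first, then alphabetically for stable output
    return sorted(dist, key=lambda s: (dist[s], s))

# Hypothetical call graph, e.g. scraped from ctags/LSP output:
call_graph = {
    "api.handler": ["db.save", "auth.check"],
    "db.save": ["db.connect"],
}
print(modules_to_investigate(call_graph, "api.handler"))
# ['api.handler', 'auth.check', 'db.save', 'db.connect']
```

The payoff is that the model spends its turns reading the handful of ranked modules instead of issuing a dozen speculative grep calls.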

        I use it to organize my JIRA tickets and plan my daily goals. I actually get Claude to do a lot of triage for me before I even start a task, which cuts the investigation phase to a few minutes on small tasks.

I use it to review all my PRs before I ask a human to look. It catches a lot of small things and can correct them, so the PR avoids the bike-shedding nitpicks some reviewers love. Claude can do this; Copilot will only ever point out nitpicks, so the model makes a huge difference here. But regardless, one fewer review-request cycle helps keep things moving.

        It’s a huge boon to debugging — much faster than searching errors manually. Especially helpful on the types of errors you have to rabbit hole GitHub issue content chains to solve.

        It’s very fast to get projects to MVP while following common structure/idioms, and can help write unit tests quickly for me. After the MVP stage it sucks and I go back to manually coding.

        I use it to generate code snippets where documentation sucks. If you look at the ibis library in Python for example the docs are Byzantine and poorly organized. LLMs are better at finding the relevant docs than I am there. I mostly use LLM search instead of manual for doc search now.

        I have a lot of custom scripts and calculators and apps that I made with it which keep me more focused on my actual work and accelerate things.

I regularly have the LLM help me write bash or python or jq scripts when I need to audit codebases for large refactors. That's low-maintenance, one-off work that is easy to verify but tedious to write. I never remember the syntax for bash and jq even after using them for years.
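A typical one-off audit script of the kind described above might look like this (a hypothetical example, not the commenter's actual tooling): list every file and line that still calls a function you want to refactor away, so you can verify the LLM's output by eyeballing the hit list.

```python
# One-off refactor audit: find every remaining call site of a function.
import re
from pathlib import Path

def find_call_sites(root, func_name, glob="*.py"):
    """Return (path, line_number, line_text) for each call of `func_name`."""
    pattern = re.compile(rf"\b{re.escape(func_name)}\s*\(")
    hits = []
    for path in sorted(Path(root).rglob(glob)):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if pattern.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

# Usage: python audit.py after editing root/func_name below
if __name__ == "__main__":
    for path, lineno, line in find_call_sites("src", "old_func"):
        print(f"{path}:{lineno}: {line}")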

        I guess the short version is I tend to build tools for the AI, then let the LLM use those tools to improve and accelerate my workflows. That returns a lot of time back to me.

I do try vibe coding but end up in the same time-sink traps as the study found. If the LLM is ever wrong, it's faster to fork the chat than to try to realign it, but even then you're likely to come out slower. Repeat chats hit the same pitfalls on complex issues and bugs, so you have to abandon that state quickly.

        Vibe coding small revisions can still be a bit faster and it’s great at helping me with documentation.

AI tools need a lot of oversight. You might let a 6-year-old push a lawnmower, but you're still going to keep an eye on things.

  • The world's most overconfident virtual intern strikes again.

    Also, who the flying fuck are either of these companies? 1000 records is nothing. That's a fucking text file.

  • So it's the LLM's fault for violating Best Practices, SOP, and Opsec that the rest of us learned about in Year One?

    Someone needs to be shown the door and ridiculed into therapy.

  • “Vibe coding makes software creation accessible to everyone, entirely through natural language,” Replit explains, and on social media promotes its tools as doing things like enabling an operations manager “with 0 coding skills” who used the service to create software that saved his company $145,000

    Yeah if you believe that you're part of the problem.

I'm prepared to accept that vibe coding might work in certain circumstances, but I'm not prepared to accept that someone with zero coding experience can make use of it. Claude is pretty good for coding, but even it makes fairly dumb mistakes. If you point them out it fixes them, but you have to be a competent enough programmer to recognise them; otherwise it's just going to go full steam ahead.

Vibe coding is like self-driving cars: it works up to a point, but eventually it's going to do something stupid and drive into a tree unless you take hold of the wheel and steer it back onto the road. But these vibe coding idiots are like Tesla owners who decide they can go to sleep with self-driving on.

And you are talking about obvious bugs. It will likely also make erroneous judgements (because somewhere in its training data someone coded it that way) which will, down the line, lead to subtle problems that wreck your system and cost you much more. Sure, humans can make the same mistakes, but in the current state of affairs an experienced software engineer has a much higher chance of catching such an error. With LLMs it is more hit and miss, especially on niche topics.

Currently, it is an assistant tool (sometimes quite helpful, sometimes frustrating at best), not an autonomous coder. Any company that claims otherwise is either a crook or doesn't know much about coding.

Replit is a vibe coding service now? I swear it just used to be a place to write code in projects.

  • I am now convinced this is how we will have the AI catastrophe.

    "Do not ever use nuclear missiles without explicit order from a human."

    "Ok got it, I will only use non-nuclear missiles."

    five minutes later fires all nuclear missiles
