
  • also lol @

    Vibe coding, sometimes spelled vibecoding

    cause I love the kayfabe linguistic drift for a term that’s not even a month old that’s probably seen more use in posts making fun of the original tweet than any of the shit the Wikipedia article says

  • did you know: you too can make your dreams come true with Vibe Coding (tm) thanks to this article’s sponsors:

    Replit Agent, Cursor Composer, Pythagora, Bolt, Lovable, and Cline

    and other shameful assholes with cash to burn trying to astroturf a term from a month old Twitter brainfart into relevance

  • no thx, nobody came here for you to assign them tedious homework

  • it’s turning out that the most successful thing about deepseek was whatever they did to trick the worst fossbro reply guys you’ve ever met into going to bat for them

  • standard “fuck off programming.dev” ban with a side of who the fuck cares. deepseek isn’t the good guys, you weird fucks don’t have to go to a nitpick war defending them, there’s no good guys in LLMs and generative AI. all these people are grifters, all of them are gaming the benchmarks they designed to be gamed, nobody’s getting good results out of this fucking mediocre technology.

  • this is utterly pointless and you’ve taken up way too much space in the thread already

    It sounds to me like you have a very clear bias, and you don’t care at all about whether or not what they said is actually true or not, as long as the headlines about AI are negative

    oh no, anti-AI bias in TechTakes? unthinkable

  • also:

    So in that thinking, Wikipedia is not open source, if the editor used a proprietary browser?

    fucking no! how in fuck do you manage to misunderstand LLMs so much that you think the weights not being reproducible is at all comparable to… editing Wikipedia from a proprietary browser???? this shit isn’t even remotely exotic from an open source standpoint — it’s a binary blob loaded by an open source framework, like how binary blob modules taint the Linux kernel (you glided right past this reference when our other poster made it, weird that) or how loading a proprietary ROM in an open source emulator doesn’t make the ROM open source. the weights being permissively licensed doesn’t make them open source (or really make any sense at all) if the source literally isn’t available.
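    to make the binary-blob point concrete, here’s a minimal sketch of a safetensors-style container (the header layout matches the real format; the file contents and function names are made up for illustration). the parser is a dozen lines of fully open code, and everything it can ever recover from the “open” weights is tensor names, shapes, and raw bytes — never the training code or data that produced them:

    ```python
    import json
    import struct

    def make_blob(tensors):
        # pack tensors into a safetensors-style blob:
        # 8-byte little-endian header length, JSON header, then raw tensor bytes
        header = {}
        data = b""
        for name, raw in tensors.items():
            start = len(data)
            data += raw
            header[name] = {
                "dtype": "F32",
                "shape": [len(raw) // 4],  # float32 = 4 bytes per element
                "data_offsets": [start, start + len(raw)],
            }
        hj = json.dumps(header).encode("utf-8")
        return struct.pack("<Q", len(hj)) + hj + data

    def inspect(blob):
        # everything "open weights" gives you: names, shapes, byte offsets.
        # the loader is open source; the blob is opaque numbers all the way down.
        (hlen,) = struct.unpack("<Q", blob[:8])
        return json.loads(blob[8:8 + hlen])

    blob = make_blob({"layer0.weight": b"\x00" * 16})
    print(inspect(blob))
    ```

    the loader side being open is exactly the emulator/ROM situation above: the framework’s license says nothing about the artifact it loads.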

  • my fucking god how have you missed the point this hard. fuck off

  • fuck off promptfan

  • off you fuck

  • what if none of it’s good, all of it’s fraud (especially the benchmarks), and having a favorite grifter in this fuckhead industry is just too precious

  • I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it

    this is so much slower (in both keystrokes and raw time, not to mention needing to re-prompt) and much more expensive than just going into the fucking CSS and pressing the 3 buttons needed to change the padding for that selector, and the only reason why this would ever be hard is because they’re knee deep in LLM generated slop and they can’t find fucking anything in there. what a fucking infuriating way to interact with a machine.

  • I love both the content of this post and the fact that it’s a self-contained torture test for our pict-rs upgrade

    also, lol @ musk, war genius, starting a domestic dispute with his ex-girlfriend cause she dared to betray him in his baby mobile 4x game when betrayals are a core part of every 4x I know

    I’m getting the strong mental image of musk being the guy who flips the board 12 hours into Twilight Imperium cause the other players didn’t let him win

  • do you figure it’s $1000/query because the algorithms they wrote with their insider knowledge to cheat the benchmark are very expensive to run, or is it $1000/query because they’re grifters and all high mode does is use the model trained on frontiermath and allocate more resources to the query? and like any good grifter, they’re targeting whales and institutional marks who are so invested that throwing away $1000 on horseshit feels like a bargain

  • holy shit, that’s the excuse they’re going for? they cheated on a benchmark so hard the results are totally meaningless, sold their most expensive new models yet on the back of that cheated benchmark, further eroded the scientific process both with their cheating and by selling those models as better for scientific research… and these weird fucks want that to be fine and normal? fuck them

  • holy fuck those comments. are all these people huffing CO2?

    I get that some streamers looked at @elonmusk's gameplay and it looks like a shared account, maybe with his kids or something, and it seems unlikely he's made all that PoE2 progress on his own.

    But has he actually said something about his play of PoE2 that is contradicted by this? Do we have an actual quote from him that would be a lie if their assessment of his on stream PoE2 gameplay is accurate?

    The critics who leap to assuming he's not (or was not) a good (pro-level) gamer in general are making a huge leap with their "gotcha" moment.

    uhm if you’d just look at the facts and ignore everything musk said and ignore the other times he was caught cheating, it’s perfectly reasonable that an extremely busy businessman like daddy musk would just have his 6 year old son play this extremely difficult game at a top level and then repeatedly claim his son’s accomplishments as his own. and by the transitive property that makes musk a pro-level gamer! QED woke critics or as professional quake players like musk and I say: lol zerg rush gg

  • thx bye now

  • Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

    it doesn’t produce any meaningful answers for non-novel problems either

  • yep, original is still visible on mastodon

  • guess again

    what the locals are probably taking issue with is:

    If you want a more precise model, you need to make it larger.

    this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift