
Stubsack: weekly thread for sneers not worth an entire post, week ending 24th August 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

209 comments
  • Here's a blog post I found via HN:

    Physics Grifters: Eric Weinstein, Sabine Hossenfelder, and a Crisis of Credibility

    Author works on ML for DeepMind but doesn't seem to be an out and out promptfondler.

    • Oh, man, I have opinions about the people in this story. But for now I'll just comment on this bit:

      Note that before this incident, the Malaney-Weinstein work received little attention due to its limited significance and impact. Despite this, Weinstein has suggested that it is worthy of a Nobel prize and claimed (with the support of Brian Keating) that it is “the most deep insight in mathematical economics of the last 25-50 years”. In that same podcast episode, Weinstein also makes the incendiary claim that Juan Maldacena stole such ideas from him and his wife.

      The thing is, you can go and look up what Maldacena said about gauge theory and economics. He very obviously saw an article in the widely-read American Journal of Physics, which points back to prior work by K. N. Ilinski and others. And this thread goes back at least to a 1994 paper by Lane Hughston, i.e., years before Pia Malaney's PhD thesis. I've read both; Hughston's is more detailed and more clear.

    • Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.

      Quote from this post:

      I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.

      Based on this I'd say the author is LLM-pilled at least.

      However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.

      Best case scenario is that the author comes around to the stochastic parrot model of LLMs.

      E: also from that post, rearranged slightly for readability here. (the [...]* parts are swapped in the original)

      My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. [...]* I was perpetually in defense mode and received none of the applause that the others did.

      So the author is also tech-brained and not "tech-fearful".

  • From the r/vibecoding subreddit, which yes is a thing that exists: "What’s the point of vibe coding if I still have to pay a dev to fix it?"

    what’s the point of vibe coding if at the end of the day i still gotta pay a dev to look at the code anyway. sure it feels kinda cool while i’m typing, like i’m in some flow state or whatever, but when stuff breaks it’s just dead weight. i cant vibe my way through debugging, i cant ship anything that actually matters, and then i’m back to square one pulling out my wallet for someone who actually knows what they’re doing. makes me think vibe coding is just roleplay for guys who want to feel like hackers without doing the hard part. am i missing something here or is it really just useless once you step outside the fantasy

    (via)

    • Oh my god, they're showing signs of sentience.

  • BLOOMBERG BREAKING: Sam Altman promises that GPT-6 will generate Ghibli images with levels of piss yellow heretofore "unseen"

  • Oxford economist in the NYT says that AI is going to kill cities if they don't prepare for change. (Original, paywalled)

    I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But at the same time, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, factors that are if anything more important than technological change alone.

    In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being "sprawling, unionized compounds" (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can't unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, the kind that had allowed them to support themselves and their families at a certain quality of life, was still gone.

    This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters, then it becomes largely irrelevant whether LLM-based "AI" products and services can actually perform as advertised. Rather than being the central cause of this disruption, it becomes the excuse, and so it just has to be good enough to create the narrative. It doesn't need to actually be able to write code like a junior developer in order to change the senior developer's job into editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that things aren't going to "snap back" when the AI bubble pops, because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.

  • So state-owned power company Vattenfall here in Sweden are gonna investigate building "small modular reactors" in response to the government's planned buildout of nuclear.

    Either Rolls-Royce or GE Vernova are in the running.

    Note that this is entirely dependent on the government guaranteeing a certain level of revenue ("risk sharing"), and of course on that guarantee surviving an eventual new government.

    • Interesting. I wonder if they manage to get further in the process than our government, which seems to restart the process every few years and then either discovers that nobody wants to do it for a reasonable price (that being building bigger reactors, not the smaller ones, which IIRC from a post here are not likely to work out), or the government falls again over their lies about foreigners and we restart the whole voting cycle. (It is getting really crazy: our fused green/labour party is now being called the dumbest stuff by the big right-wing liberal party, who are not openly far right, just courting it a lot.)

      Our new elections are on 29 October. Let's see what the ratio between formation and actually ruling is going to be this time. (Last time it took 223 days for a cabinet to form, and by my calculations they ruled for only 336 days.)

      • Nuclear has been a running sore in Swedish politics since the late 70s. Opposition to it represented a reaction to the classic employer-employee class détente in place since the 1930s, where both the dominant Social Democrats and the opposition on the right were broadly in agreement that economic growth == good, and nuclear was part of that. There was a referendum in the early 80s where the alternatives were classically Swedish: Yes, No, and "No, but we wait a few years".

        Decades have passed, and now being pro-nuclear is very right-coded. While the current Social Democrats are probably secretly happy that we're supposed to get more electrical power, there's political hay to be made opposing the racist shitheads. Add to that that financing this shit would actually mean more expensive electricity, and I doubt it will remain popular.

    • Rolls-Royce are looking at this as a big sack with a "£" on the side.
