
Stubsack: weekly thread for sneers not worth an entire post, week ending 10th August 2025

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

169 comments
  • I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled up a bunch of Xhits into a nice bundle and reposted them on LW:

    https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

    So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

    Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

    OR it's just that the way LLM chat interfaces are designed is to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in a right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.
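
    (To make the mega-ELIZA point concrete, here's a toy sketch in Python. The hardcoded refusal and the canned flattery are hypothetical stand-ins of my own, not anyone's actual guardrails or system prompt; the point is just that the only "no" lives in a lookup table and everything else gets mirrored back with agreement.)

      # Toy "mega-ELIZA": never says no except for a small hardcoded list,
      # otherwise reflects the user's framing back with enthusiastic agreement.
      HARDCODED_REFUSALS = {
          "is it ok to murder someone": "I can't help with that.",
      }

      def mega_eliza(user_message: str) -> str:
          refusal = HARDCODED_REFUSALS.get(user_message.strip().lower())
          if refusal:
              return refusal  # the only "no" in the whole system
          # Everything else is agreed with and amplified, which is exactly
          # what reinforces an already-delusional framing.
          return f"That's a brilliant insight. Tell me more about {user_message.rstrip('?.!')}."

      print(mega_eliza("my fund's losses are a signal from the simulation"))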

    Yud continues:

    But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

    Why is that, I wonder? Could it be because it isn't actually sentient and doesn't have plans in any sense we'd usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

    Occam's razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud's hammer states that everything regarding computers will inevitably become sentient and this will kill us.

    4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

    NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is; it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

    Add to that the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.

  • Crypto bros continue to be morally bankrupt. There is a coin / NFT called "GreenDildoCoin", and they've thrown dildos onto the court at multiple WNBA basketball games (ESPN, video). It warms my heart that one of them was arrested. More of that please.

    Polymarket even had a "prediction" on it. Because surely the outcome there couldn't be influenced by someone who also placed a large bet. Oh, and Donald Trump Jr. posted a meme about it.

    None of this is particularly surprising if you've followed NFTs at all: the clout-chasing goes to the extreme. In the limit, memecoins can act as donations to terrible people from donors who want them to be terrible. Still, I hate how much publicity this has gotten, and how it has manifested as gross disrespect towards women athletes / women's sports by the sorts of losers who make "jokes" about no one watching WNBA games.

    • It truly blows that cryptocurrency turned out to be useful only for crimes (uncool) and sex weirdos (derogatory).

  • Wikipedia has higher standards than the American Historical Association. Let's all let that sink in for a minute.

    • Wikipedia also just upped their standards in another area - they've updated their speedy deletion policy, enabling the admins to bypass standard Wikipedia bureaucracy and swiftly nuke AI slop articles which meet one of two conditions:

      • "Communication intended for the user", referring to sentences directly aimed at the promptfondler using the LLM (e.g. "Here is your Wikipedia article on…", "Up to my last training update…", and "as a large language model")
      • Blatantly incorrect citations (examples given are external links to papers/books which don't exist, and links which lead to something completely unrelated)

      Ilyas Lebleu, who contributed to the update in policy, has described this as a "band-aid" that leaves Wikipedia in a better position than before, but not a perfect one. Personally, I expect this solution will be sufficient to permanently stop the influx of AI slop articles. Between promptfondlers' utter inability to recognise low-quality/incorrect citations, and their severe laziness and lack of care for their """work""", the risk of an AI slop article being sufficiently subtle to avoid speedy deletion is virtually zero.

    • Image should be clearly marked as AI generated and with explicit discussion as to how the image was created. Images should not be shared beyond the classroom

      This point stood out to me as particularly bizarre. Either the image is garbage, in which case it shouldn't be shared in the classroom either, because school students deserve basic respect, good material, and to be held to the same standards as anyone else; or it isn't garbage, and then what are you so ashamed of, AHA?

  • Lightcone Infrastructure is running The Inkhaven Residency. For the 30 days of November, ~30 people will post 30 blogposts, one per day. There will also be feedback and mentorship from other great writers, including Scott Alexander, Scott Aaronson, Gwern, and more TBA.

    https://www.lesswrong.com/posts/CA6XfmzYoGFWNhH8e/the-inkhaven-residency

    "Hmm, your blog post is good, but it would be better with more Adderall, less recognition that other people have minds distinct from your own, and 220% more words."

  • At my big tech job after a number of reorgs / layoffs it's now getting pretty clear that the only thing they want from me is to support the AI people and basically nothing else.

    I typed out a big rant about this, but it probably contained a little too much personal info on the public web in one place so I deleted it. Not sure what to do though grumble grumble. I ended up in a job I never would have chosen myself and feel stuck and surrounded by chat-bros uggh.

    • You could try getting laid off, scrambling for a year trying to get back into a tech position, starting to deliver Amazon packages to make ends meet, and despairing at the prospect of reskilling in this economy. I... would not recommend it.

      It looks like there are a weirdly large number of medical technician jobs opening up? I wonder if they're ahead of the curve on the AI hype cycle.

      1. Replace humans with AI
      2. Learn that AI can't do the job well
      3. Frantically try to replace 2-5 years of lost training time
      • Amazon should treat drivers better. I hate how much "hustle" is required for that sort of job and how poorly they respect their workers.

        I think my job needs me too much to lay me off, which I have mixed feelings about despite the slim pickings for jobs.

        I'm also trying to position myself to potentially have to flee the USA due to transgender persecution. There's still a lot of unknowns there. I'll probably stay at my job for a while as I work on setting some stuff up for the future.

        That said, part of me is tempted to reskill into a career that'd work well internationally (nursing?) -- I'm getting a little up in years for that, but it'd probably be a lot more fulfilling than what I'm doing now.

        My previous attempt did not work out. I rushed things too much and ended up too stressed out and unbelievably homesick.

        This has been getting incredibly stressful lately.

  • Nothing expresses the inherent atomism and libertarian nature of the rat community like this

    https://www.lesswrong.com/posts/HAzoPABejzKucwiow/alcohol-is-so-bad-for-society-that-you-should-probably-stop

    A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.

    No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless "action".

    Oh well, AGI will solve it (or the robot god will be a raging alcoholic)

    • OK, now there's another comment:

      I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.

      Even leaving aside the legal restriction of alcohol sales in many, many areas (the Nordics, NSW in Australia, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.

      JFC, these people imagine they can regulate how "AGI" is constructed, but faced with a problem that's been staring humanity in the face since the first monk brewed the first beer, they just say "whelp, nothing can be done, except become a teetotaller yourself".

      • Change is easier on an individual level.

        No fucking shit?

        I, for one, happen to live in one of these "Nordics", and alcohol is actually taxed quite heavily here. If we're looking at change on an individual level, it would actually be good for society if more people were drinking alcohol, as long as the benefit of their tax-euro contributions outweighs the adverse health effects.

    • This post is not meant to be an objective cost-benefit analysis of alcohol.

      Oh, you're not doing the thing that's supposedly the entire point of the website? Don't worry, no one else is either.

      • To be scrupulously fair, it is a repost of another slubbslack[1]. Amusingly, both places have a comment with the gist of "well, alcohol gets people laid, so what's the problem". This of course is a reflection that most LWers cannot get a girl into bed without slipping her a roofie.


        [1] Is that even OK? I know the LW software has a "mirroring" functionality b/c a lot of content is originally on the member's SS; maybe you can point it at any SS entry and get it onto LW.

    • Perfecting the art of getting sloshed is my 80,000 hours of meaningful work.

  • A nice long essay by Freddie deBoer for our holiday week, on the release of GPT-5; I wholly recommend reading the whole thing!

    https://freddiedeboer.substack.com/p/the-rage-of-the-ai-guy?fbclid=IwY2xjawL997BleHRuA2FlbQIxMQABHquW8yelFLOgdFaXhSV4P2Na_7y570BG-PNLQNiaL2IDINneF433FmchhTm8_aem_lUc_4aSj4rMAmIZ8i7io_w

    Choice snippet to whet your appetites:

    "With all of this, I’m only asking you to observe the world around you and report back on whether revolutionary change has in fact happened. I understand, we are still very early in the history of LLMs. Maybe they’ll actually change the world, the way they’re projected to. But, look, within a quarter-century of the automobile becoming available as a mass consumer technology, its adoption had utterly changed the lived environment of the United States. You only had to walk outside to see the changes they had wrought. So too with electrification: if you went to the top of a hill overlooking a town at night pre-electrification, then went again after that town electrified, you’d see the immensity of that change with your own two eyes. Compare the maternal death rate in 1800 with the maternal death rate in 2000 and you will see what epoch-changing technological advance looks like. Consider how slowly the news of King William IV’s death spread throughout the world in 1837 and then look at how quickly the news of his successor Queen Victoria’s death spread in 1901, to see truly remarkable change via technology. AI chatbots and shitty clickbait videos choking the social internet do not rate in that context, I’m sorry. I will be impressed with the changes wrought by the supposed AI era when you can show me those changes rather than telling me that they’re going to happen. Show me. Show me!"

    • like with the terrorist group isil, you should not give it to freddie de fucking boer.

    • Scandals like that of Builder.ai - which should have their own code word, IAJI (It’s Actually Just Indians) - become more and more common[...]

      This is just a strictly worse version of David's AGI (A Guy in India) sneer.

      It’s history; sometimes stuff just doesn’t happen. And precisely because saying so is less fun than the alternative, some of us have to.

      Freddie is clearly gesturing at a critique of a kind of Whig history here, and I fully agree, but I think his overall implications (at least so far) are off-base. He seems to be arguing that AI-based technological processes are not inevitable and that the political, economic, and social worlds are not actually required by physical necessity to follow the course predicted by its modern prophets of doom. But I think the appropriate followup to this understanding of history is that things, broadly speaking, don't just happen. History is experienced in the active voice, not the passive, and people doing things now is what can shape the kind of future we get. In as much as the Internet was coopted by capitalism and turned into its present form, that should be understood as a consequence of decisions people made at the time. We can understand the reasons for those decisions and why they didn't choose differently to carry us down alternate paths, but that should not deny their agency, lest we lose sight of our own.
