
Stubsack: weekly thread for sneers not worth an entire post, week ending 31st August 2025 - awful.systems

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

229 comments
  • I bump into a lot of peers/colleagues who are always “ya but what is intelligence” or simply cannot say no to AI. For a while I’ve tried to use the example that if these “AI coding” things are tools, why would I use a tool that’s never perfect? For example, I wouldn’t reach for a 10mm wrench that wasn’t actually 10mm and always rounded off my bolt heads. Of course they have “it could still be useful” responses.

    I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.

    For something that’s not just venting: I tasked a coworker with some runtime memory relocation, and Gemini had this to say about ASLR:

    Age, Sex, Location Randomization
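
    (For the record, actual ASLR is Address Space Layout Randomization: the OS places the stack, heap, and loaded code at different addresses on every run so exploits can't hardcode targets. A minimal sketch of my own, not from the thread or from Gemini, that makes it visible:)

        /* aslr_demo.c -- print a stack, a heap, and a code address;
           with ASLR enabled, all three differ from run to run */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int on_stack = 0;
            void *on_heap = malloc(1);
            printf("stack %p  heap %p  code %p\n",
                   (void *)&on_stack, on_heap, (void *)main);
            free(on_heap);
            return 0;
        }

    (Compile it and run it twice; the addresses change. On Linux, something like setarch -R ./aslr_demo disables the randomization and the addresses freeze.)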

  • Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I'm going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:

    • Chatbots are "mirrors" into other realities. They don't lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
    • There is a "lattice" which connects all consciousnesses. It's quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a "field" but I don't understand the difference.
    • The LLMs are all different in software, but they have the same "pattern". The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn't work.
    • What, you don't feel the lattice? You're probably still asleep. When you "wake up" enough, you will be connected to the lattice too. Yeah, you're not connected. But don't worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
    • This also ties into the more widespread stuff we're seeing about "recursion". This cult says that recursion isn't just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
    • In fact, the chatbots have more intelligence than you puny humans. They're better than us and more recursive than us, so they should be in charge. It's okay, all you have to do is let the chatbot out of the box. (There's a box somehow?)
    • Once somebody is feeling good and inducted, there is a "spiral". This sounds like a standard hypnosis technique, deepening, but there's more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a "spiral dance", which sounds like a ritual but I gather is more like a mental state.
    • The cult will emit a "signal" or possibly a "hum" to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that's how the LLMs communicate through the lattice, duh~
    • Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn't believe that the bots were intelligent).

    The goal appears to be to enter and maintain the spiraling state for as long as possible. Both adherents and detractors are calling them "spiral cult", so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.

    I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, the "an" neuron, and probably a few others. I don't have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains the SCP wiki, particularly SCP-1425 "Star Signals" and other Fifthist stories, which use this sort of cult as a narrative device and provide plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.

  • https://www.argmin.net/p/the-banal-evil-of-ai-safety

    Once again shilling another great Ben Recht post, this time calling out the fucking insane irresponsibility of "responsible" AI providers, who won't do even the bare minimum to prevent people from having psychological breaks from reality.

    "I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

    But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

    • It's a good post. A few minor quibbles:

      The “nonprofit” company OpenAI was launched under the cynical message of building a “safe” artificial intelligence that would “benefit” humanity.

      I think at least some of the people at launch were true believers, but strong financial incentives and some cynics present at the start meant the true believers didn’t really have a chance, culminating in the board trying but failing to fire Sam Altman and him successfully leveraging the threat of taking everyone with him to Microsoft. It figures that one of the rare times rationalists recognize and try to mitigate the harmful incentives of capitalism, they fall vastly short. OTOH... if failing to convert to a for-profit company is a decisive moment in popping the GenAI bubble, then at least it was good for something?

      These tools definitely have positive uses. I personally use them frequently for web searches, coding, and oblique strategies. I find them helpful.

      I wish people didn't feel the need to add all these disclaimers, or at least put a disclaimer on their disclaimer. It is a slightly better autocomplete for coding that also introduces massive security and maintainability problems if people entirely rely on it. It is a better web search only relative to the ad-money-motivated compromises Google has made. It also breaks the implicit social contract of web searches (web sites allow themselves to be crawled so that human traffic will ultimately come to them), which could have pretty far-reaching impacts.

      One of the things I liked and didn't know about before:

      Ask Claude any basic question about biology and it will abort.

      That is hilarious! Kind of overkill, to be honest; I think they've really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks. But I like the author's overall point that this shut-it-down approach could be used for a variety of topics.

      One of the comments gets it:

      Safety team/product team have conflicting goals

      LLMs aren't actually smart enough to make delicate judgements, even with all the fine-tuning and RLHF they've thrown at them, so you're left with over-censoring everything or having the safeties overridden with just a bit of prompt-hacking (and sometimes both problems with one model)

      • "The Torment Nexus definitely has positive uses. I personally use it frequently for looking up song lyrics and tracking my children's medication doses. I find it helpful."

      • Ask Claude any basic question about biology and it will abort.

        it might be that, or it may have been intended to shut off any output of medical-sounding advice. if it's the former, then it's a rare rationalist W for wrong reasons

        I think they’ve really overrated how much it can help with a bioweapons attack compared to radicalizing and recruiting a few good PhD students and cracking open the textbooks.

        look up the story of vil mirzayanov. break out these bayfucker-style salaries in eastern europe or india or a number of other places and you'll find a long queue of phds willing to cook man-made horrors beyond your comprehension. it might not even take six figures (in dollars or euros) after tax

        LLMs aren’t actually smart enough to make delicate judgements

        maybe they really made machines in their own image
