Stubsack: weekly thread for sneers not worth an entire post, week ending 19th October 2025

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

156 comments
  • Hey, remember Sabine Hossenfelder? The transphobe who makes YouTube videos? She published a physics paper! Well, OK, she posted a thing to the arXiv for the first time since January 2024. I read it, because I've been checking the quant-ph feed on a daily basis for years now, and reading anything else is even more depressing. It's vague, meandering glorp that tries to pretty up a worldview that amounts to renouncing explanation and saying everything happens because Amon-Ra wills it. Two features are worth commenting upon. The acknowledgments say,

    I acknowledge help from ChatGPT 5 for literature research as well as checking this manuscript. I swear I actually wrote it myself.

    "Tee hee, I shut off my higher brain functions" is a statement that should remain in the porn for those who have a fetish for that.

    And what literature does Hossenfelder cite? Well, there's herself, of course, and Tim Palmer (one of those guys who did respectable work in his own field and then decided to kook out about quantum mechanics). And ... Eric Weinstein! The very special boy who dallied for a decade before writing a paper on his revolutionary theory and then left his equations in his other pants. Yes, Hossenfelder has gone from hosting a blog post that dismantled "Geometric Unity" to citing it as a perfectly ordinary theory.

    If she's not taking Thielbux, she's missing an opportunity.

  • More AI bullshit hype in math. I only saw this just now so this is my hot take. So far, I'm trusting this r/math thread the most as there are some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

    Context: Paul Erdős was a prolific mathematician who had more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn't solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems in a single website (https://www.erdosproblems.com/). This site is still maintained by this one person, which will be an important fact later.

    Terence Tao is a present-day prolific mathematician, and in the past few years, he has really tried to take AI with as much good faith as possible. Recently, some people used AI to search up papers with solutions to some problems listed as unsolved on the Erdős problems website, and Tao points this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

    You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn't see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick. GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original is now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that have already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

    There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, publicly called out the hype). I hope this is a sign of things to come for the AI bubble in general.

    EDIT: Turns out it was not some rando spreading the hype, but an employee of OpenAI. He has taken his original claim back, but not without trying to defend what he can by saying AI is still great at literature review. At this point, I am skeptical that this even proves AI is great at that. After all, the issue was that a website maintained by a single person had not updated the status of 10 problems inside a list of over 1000 problems. Do we have any control experiments showing that a conventional literature review would have been much worse?

  • Somehow I missed the fact that yesterday paypal’s blockchain operator fucked up and accidentally minted 300 trillion itchy and scratchy coins.

    https://www.web3isgoinggreat.com/?id=paxos-accidental-mint

    And now apparently it turns out that it was just a sequence of stupid whereby they accidentally deleted 300 million, which would have been impressive all by itself, then tried to recreate it (🎶 but at least it isn’t fiat currency🎶) and got the order of magnitude catastrophically wrong and had to delete that before finally undoing their original mistake. Future of finance right here, folks.

    Anyone else know the grisly details? The place I heard it from is a mostly-private account on mastodon which isn’t really shareable here, and they didn’t say where they’d heard it.

  • (cross posting here and sneer club)

    I regret to inform you, another Anthropic cofounder has written an essay about Claude fondling.

    "Anthropic cofounder admits he is now "deeply afraid" ... "We are dealing with a real and mysterious creature, not a simple and predictable machine ... We need the courage to see things as they are."

    https://www.reddit.com/r/ArtificialInteligence/comments/1o6cow1/anthropic_cofounder_admits_he_is_now_deeply/?share_id=_x2zTYA61cuA4LnqZclvh

    There's so many juicy chunks here.

    "I came to this position uneasily. Both by virtue of my background as a journalist and my personality, I’m wired for skepticism...

    ...You see, I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple....

    ...And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed. Of course, it does not do this today. But can I rule out the possibility it will want to do this in the future? No."

    Despite my jests, I gotta say, this post reeks of desperation. Benchmaxxxing just isn't hitting like it used to, bubble fears are at an all-time high, and OAI and Google are the ones grabbing headlines with content generation and academic competition wins. The good folks at Anthropic really gotta be huffing their own farts to believe they're in the race to wi-

    "Years passed. The scaling laws delivered on their promise and here we are. And through these years there have been so many times when I’ve called Dario up early in the morning or late at night and said, 'I am worried that you continue to be right'. Yes, he will say. There’s very little time now."

    LateNightZoomCallsAtAnthropic dot pee en gee

    Bonus sneer: speaking of self aware wolves, Jagoff Clark somehow managed to updoot Doom's post?? Thinking the frog was unironically endorsing his view that the server farm was going to go rogue???? Will Jack achieve self awareness in the future? Of course, he does not do this today. But can I rule out the possibility he will do this in the future? Yes.

  • Good news everyone, there's 2 bonkers pieces about the stars and the galaxy on LW right now!

    Here's a dude very worried about how comets impacting the sun could cause it to flare and scorch the earth. Nothing but circumstantial evidence, and GenAI-researched to boot. Appeared in the EA forum as part of their "half-baked ideas" amnesty.

    https://www.lesswrong.com/posts/9gAksZ25wbvfS8FAT/a-new-global-risk-large-comet-s-impact-on-sun-could-cause

    The only thing I'd note about this is that even if the comet strikes along the plane of the ecliptic (not an unreasonable assumption), the planet would still have to be in exactly the right place for this assumed plume of energy to do any damage. And if it hits the Sahara or the Pacific, NBD presumably.

    (Edit turns out the above is just the abstract, the full piece is here:

    https://docs.google.com/document/d/1OHgc7Q4git6OfDNTE_TDf9fFNgrEEnCUfnPMIwbK3vg/edit?usp=sharing)

    Then there's this person looking really far ahead into how to get energy from the universe

    https://www.lesswrong.com/posts/YC4L5jxHnKmCDSF9W/some-astral-energy-extraction-methods

    Tying galaxies together: Anchor big rope to galaxies as they get pulled apart by dark matter. Build up elastic potential energy which can be harvested. Issue: inefficient. [...] Not clear (to me) how you anchor rope to the galaxies.

    Neutrino capture: Lots of neutrinos running around, especially if you use hawking radiation to capture mass energy of black holes. So you might want to make use of them. But neutrinos are very weakly interacting, so you need dense matter to absorb their energy/convert them to something else. Incredibly dense. To stop one neutrino with lead you need 1 lightyear of matter, with a white dwarf you need an astronomical unit, and for a neutron star (10^17 kg/m^3 density, 10 km radius) you need 340 meters of matter. So neutrino capture is feasible,

    (my emphasis)
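    For what it's worth, those stopping lengths roughly hang together under the crude assumption that stopping a neutrino takes a fixed column density (mass per unit area), so stopping length scales as 1/density. A quick sketch, with the lead and neutron-star densities being my own assumed round numbers rather than anything from the post:

    ```python
    # Sanity check: if stopping a neutrino requires a fixed column density
    # (kg/m^2), then stopping length scales inversely with material density.
    LIGHT_YEAR_M = 9.461e15   # metres per light-year
    AU_M = 1.496e11           # metres per astronomical unit

    RHO_LEAD = 1.134e4        # kg/m^3, density of lead (assumed)
    RHO_NEUTRON_STAR = 1e17   # kg/m^3, density quoted in the post

    # Calibrate the required column density from "1 light-year of lead".
    column_density = RHO_LEAD * LIGHT_YEAR_M        # kg/m^2, ~1e20

    # Implied figures at other densities:
    wd_density = column_density / AU_M              # white-dwarf density giving a 1 AU stopping length
    ns_length = column_density / RHO_NEUTRON_STAR   # stopping length in neutron-star matter

    print(f"column density: {column_density:.2e} kg/m^2")
    print(f"white-dwarf density for 1 AU stopping length: {wd_density:.2e} kg/m^3")
    print(f"neutron-star stopping length: {ns_length:.0f} m")
    ```

    This yields a white-dwarf density around 7e8 kg/m^3 (plausible) and a neutron-star stopping length of roughly a kilometre, the same order as the post's 340 m; the exact factor would depend on neutrino energy and composition.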

    Black Hole Bombs: Another interesting way of extracting energy from black holes is superradiant instabilities, i.e. making the black hole into a bomb. You use light to extract angular momentum from the black hole, kinda like the Penrose process, and get energy out. With a bunch of mirrors, you can keep reflecting the light back in and repeat the process. This can produce huge amounts of energy quickly, on the order of gamma ray bursts for stellar mass black holes. Or if you want it to be quicker, you can get 1% of the black hole's mass energy out in 13 seconds. How to collect this is unclear.

    (again, my emphasis)
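    Taking the quoted "1% of the mass energy in 13 seconds" at face value, the implied power is easy to work out; the solar-mass black hole here is my assumption, since the post doesn't pin down a mass:

    ```python
    # Implied power if 1% of a black hole's rest-mass energy (E = m * c^2)
    # is released over 13 seconds.
    C = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg (assumed black hole mass)

    energy = 0.01 * M_SUN * C**2   # joules released
    power = energy / 13.0          # watts

    print(f"energy: {energy:.2e} J, power: {power:.2e} W")
    ```

    That comes out around 1e44 W, which is indeed in the general ballpark of gamma-ray burst luminosities, so at least the comparison in the quote is internally consistent, whatever one thinks of the mirrors.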

    Same author has a recent post titled "Don't Mock Yourself". Glad to see they've taken this advice to heart and outsourced the mocking.

    • Disclaimer: abstract above, content and main ideas are human-written; the full text below is written with significant help of AI but is human-verified as well as by other AIs.

      "Oh, that pizza sauce recipe that calls for glue? It's totally OK, I checked it out with MechaHitler."

  • Interesting developments reported by ars technica: Inside the web infrastructure revolt over Google’s AI Overviews

    I don’t think any of this is actually good news for the people who’re actually suffering the effects of ai scraping and bullshit generation, but I do think it is a good idea that someone with sufficient clout is standing up to google et al and suggesting that they can’t just scrape all the things, all the time, and then screw the source of all their training data.

    I’m somewhat unhappy that it is cloudflare doing this, a company who have deeply shitty politics and an unpleasantly strong grasp on the internet already. I very much do not want the internet to be divided into cloudflare customers, and the slop bucket.

    this has been previously sneered on, but only now i've clocked this: AI2027 is another name for, or elaboration of, the "san francisco consensus" from april this year or so, "named so because everyone who believes in it is in san francisco". more precisely, the timeline roughly matches and it hinges on iterative self-improvement

    which might just mean it was laundered through chatbots to bulk it up initially. what i mean to say is even openai's dooming might not be original

    so if anyone's counting then this thing is a couple of months older

    • "Everyone who believes this lives in San Francisco" is the pure poetry of self-owns.

        i'm not sure at this point if it was some (then) ea-er inside belief that eric schmidt (the guy who said that, formerly of google) was exposed to, and he sneered at it immediately, then it was shown to the world at length proper by rationalists