
Stubsack: weekly thread for sneers not worth an entire post, week ending 12th October 2025

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

183 comments
  • After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora2 and it's honestly pretty funny.

    If you're the kind of motherfucker who will create SlopTok, you are not the kind of motherfucker who should be in charge of OpenAI.

    Not that anyone should be in charge of that shitshow of a company, but hey!

    Bonus sneer from the comment section:

    Sam Altman in Feb 2015: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

    Sam Altman in Dec 2015, after co-founding OpenAI: "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

    Sam Altman 4 days ago, on his personal blog: "we are going to have to somehow make money for video generation."

    • After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora2 and it’s honestly pretty funny.

      I don't see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora's completely goddamn justified. This shit is going to enable scams, misinformation and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

      Got another bonus sneer from the comments as well:

      Polluting human knowledge with crap, making internet useless, taking away jobs from creative people by making things that look creative enough. Governments are complicit, politicians are bribed. Like that suck-up youtuber [Two Minute Papers] repeats, "What a time to be alive" right ?

      (Sidenote: It massively fucking sucks how Two Minute Papers drank the AI Kool-Aid, I used to love that channel.)

      • I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora’s completely goddamn justified. This shit is going to enable scams, misinformation and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

        No, it's absolutely justified and I agree with basically everything he says in the video (esp. the title; there is really no reason for technology like this to exist in the hands of the public, there's zero upside to it). It's just funny to me because the video is just so different from his usual calm stuff.

        But honestly, good for him and (hopefully) his community too.

  • A major Australian university used artificial intelligence technology to accuse about 6,000 students of academic misconduct last year.

    The most common offence was using AI to cheat, but many of the students had done nothing wrong.

    https://www.abc.net.au/news/2025-10-09/artificial-intelligence-cheating-australian-catholic-university/105863524

    • Man, I hope it doesn't become good advice to suggest that students screen record all their classwork to avoid this sort of shit, but I have a sinking feeling.

      • Well, now the students have incentive to record all the stages of their progress. And the professors have incentive to assign projects in stages, each one of which has a tangible output. It's the golden age of the index card, baby! Fill out one for each idea or claim you want to use from a book you read, wrap the stack with a rubber band... This oddly dovetails with the professors' obligations, since now we have to teach how to do research all over again.

  • In the Financial Times:

    The hundreds of billions of dollars companies are investing in AI now account for an astonishing 40 per cent share of US GDP growth this year. And some analysts believe that estimate doesn’t fully capture the AI spend, so the real share could be even higher.

    AI companies have accounted for 80 per cent of the gains in US stocks so far in 2025. That is helping to fund and drive US growth, as the AI-driven stock market draws in money from all over the world, and feeds a boom in consumer spending by the rich.

    Since the wealthiest 10 per cent of the population own 85 per cent of US stocks, they enjoy the largest wealth effect when they go up. Little wonder then that the latest data shows America’s consumer economy rests largely on spending by the wealthy. The top 10 per cent of earners account for half of consumer spending, the highest share on record since the data begins.

  • An investor runs the numbers on AI capex and is not impressed

    (n.b. I have no idea who this guy is or his track record (or even if he's a dude), but I think the numbers check out and the parallels to railroads in the 19th century are interesting too)

    Global Crossing Is Reborn…

    Now, I think AI grows. I think the use-cases grow. I think the revenue grows. I think they eventually charge more for products that I didn’t even know could exist. However, $480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product. To put this into perspective, Netflix had $39 billion in revenue in 2024 on roughly 300 million subscribers, or less than 10% of the required revenue, yet having rather fully tapped out the TAM of users who will pay a subscription for a product like this. Microsoft Office 365 got to $95 billion in commercial and consumer spending in 2024, and then even Microsoft ran out of people to sell the product to. $480 billion is just an astronomical number.

    Of course, corporations will adopt AI as they see productivity improvements. Governments have unlimited capital—they love overpaying for stuff. Maybe you can ultimately jam $480 billion of this stuff down their throats. The problem is that $480 billion in revenue isn’t for all of the world’s future AI needs, it’s the revenue simply needed to cover the 2025 capex spend. What if they spend twice as much in 2026?? What if you need almost $1 trillion in revenue to cover the 2026 vintage of spend?? At some point, you outrun even the government’s capacity to waste money (shocking!!)

    An AI Addendum

    As a result, my blog post seems to have elicited a liberating realization that they weren’t alone in questioning the math—they’ve just been too shy to share their findings with their peers in the industry. I’ve elicited a gnosis, if you will. As this unveiling cascaded, and they forwarded my writings to their friends, an industry simultaneously nodded along. Personal self-doubts disappeared, and high-placed individuals reached out to share their epiphanies. “None of this makes sense!!” “We’ll never earn a return on capital!!” “We’ve been wondering the same thing as you!!”

    [...]

    Remember, the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue. The mismatch is astonishing, and this ignores that in 2026, hundreds of billions of additional datacenters will get built, all needing additional revenue to justify their existence. Adding the two years together, and using the math from my prior post, you’d need approximately $1 trillion in revenue to hit break even, and many trillions more to earn an acceptable return on this spend. Remember again, that revenue is currently running at around $15 to $20 billion today.
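    For what it's worth, the mismatch in the quoted post is easy to sanity-check with its own figures. The sketch below uses midpoints of the quoted ranges ($1.5B/month in revenue, $18B/year) — those midpoints are my reading of the ranges, not the author's exact numbers:

```python
# Quick sanity check of the arithmetic in the quoted post, using only
# figures the post itself gives (midpoints assumed for the ranges).
monthly_capex = 30e9                     # "over $30 billion a month"
annual_capex_2025 = monthly_capex * 12   # ~$360B; the post rounds to ~$400B

monthly_revenue = 1.5e9                  # "a bit more than a billion a month"
mismatch = monthly_capex / monthly_revenue   # capex outruns revenue ~20x

breakeven_revenue = 480e9                # post's revenue needed to cover 2025 capex
current_annual_revenue = 18e9            # post's "$15 to $20 billion today"
gap = breakeven_revenue / current_annual_revenue  # revenue must grow ~27x

print(f"annual capex ≈ ${annual_capex_2025 / 1e9:.0f}B")
print(f"capex/revenue mismatch ≈ {mismatch:.0f}x")
print(f"revenue growth needed just to break even ≈ {gap:.0f}x")
```

    So even before any 2026 spend, revenue would have to grow roughly an order of magnitude and then some just to cover the 2025 vintage — which is the post's whole point.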

  • Is it just me or does it feel like there's a concerted effort to boost the AT protocol in tech venues? Maybe I'm paranoid, but it does feel like there's a bit of openwashing going on.

    • I'm not really concerned about it. The overlap between people who give a shit about AT and people who don't already use some kind of ActivityPub platform is microscopic. I'm happy to let Bluesky shoot itself in the foot by adopting the number one thing people complain about with Mastodon, namely the existence of multiple instances and having to choose from among them.

      AIUI the AT protocol is in fact a bona fide open protocol with a Free (MIT/Apache-2.0) reference implementation available. If this is openwashing, I welcome this new style of openwashing where you actually publish open source software instead of just implying your proprietary software is not really proprietary.

      • By "openwashing" I mean the posts about AT protocol are running cover for Bluesky, the company. It's basically reinforcing their narrative that if you don't like what they're doing, "just" start your own PDS. By focussing on the technical nitty-gritty, these posts ignore the structures in place keeping Bluesky in the dominant position.

        An analogy: Bitcoin's code is also open, but 1% of coin owners own like 90% of the coins. I'm not making any excuses for BTC here, but I seem to remember a bunch of similar articles breathlessly explaining how BTC "solves the Byzantine generals problem" while totally ignoring the ownership profile.

      • AT is fashtech. This needs a proper writeup I realise, but it ticks too many boxes in theory and practice. I don't welcome this style of openwashing, where it's COMPLETELY OPEN except in all practice. Like, you could say the same about Urbit.

  • Found a pretty good sneer against vibe coding: The Programmer Identity Crisis

    • This author touches on a point that dovetails with my thinking:

      Dijkstra, in “On the foolishness of ‘natural language programming’,” wrote, rather poignantly: “We have to challenge the assumptions that natural languages would simplify work.” And: “The virtue of formal texts is that their manipulation, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.”

      I think it likely that these tools will not be judged, in the long term, by the ambitions and hopes of the AGI cultists and hype-men, but by comparison to the many other attempts at natural-language programming in English. Smalltalk, Visual Basic, I even want to throw in AppleScript, as simple and threadbare as it was. How are all of these doing now?

      AppleScript has been complemented or perhaps superseded by at least two more graphically-oriented attempts at system automation targeted at non-technical users. One could argue that its falloff came from an imperfect marriage with the message-passing/service-oriented architecture based on Objective-C and inherited from NeXT in Mac OS X, a system design which is itself now vestigial. The comparison with LLM coding assistants is imperfect, as they seem to be typically targeted at the more granular level of the class or the method, rather than explicit high-level hooks in an application. A better comparison here would be the last year or so worth of "AI agents," but, uhm, ahh...

      Smalltalk seemed to have a pretty big boom in the late 80s/early 90s, but tapered off rapidly after that. I like the more modern implementation of Pharo well enough, but it strives to throw in everything and the kitchen sink, with a downright balk-worthy amount of packages listed when you open up the class browser. On top of that, a few weeks ago I noticed someone in their Discord telling a newbie that current good practice is to file out your code every once in a while and then start over with a fresh image, as various background processes in stock images typically become unstable over time. This is orthogonal to the natural-language-like design, but it is a stumbling block to the sense of "liveness" and interactivity that is similarly a big hook for LLM assistance. Furthermore, as far as I know, they still don't have a stable answer for system-level parallelism in the VM. All I've seen is a rather awkward technique for spinning off tree-shaken child VMs if there's some method you want to run in parallel. You've got to really love Smalltalk to want to work past that shortcoming!

      VB.NET I can't really speak to, except that it seems Microsoft now considers it a stable language with little if any new feature development. The original implementation never seemed to have a good rep for maintainability, and the very idea of native Forms seems out of fashion compared to JavaScript web-app frontends. And the land of JavaScript, of course, seems to be the most fertile and uncontested kingdom of LLM coding assistance. I'm genuinely interested to hear more experiences with modern VB, as it strikes me as the last great corporate-sanctioned push for non-technical users to build their own apps, and thus the most worthy comparison.

      All this is to say that none of these previous attempts at natural-language programming has bit-rotted too hard; implementations are still available and you can probably salvage a legacy project with some effort. But each of them has been sidelined by industry over time. Not necessarily because of Dijkstra's objection to the ambition of approaching natural language, although I don't think we can totally discount that as a factor; other technical or platform restrictions certainly hamstrung each of them. And LLM tools are still mostly API-based SaaS, which always has the glaring technical vulnerability of the provider running out of money. Yes, people will still pursue local models, but the bubble bursting could do a lot more harm to this approach than proponents anticipate.

      • an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.

        Yeah, like convincing people to start counting at zero, causing billions in damages from off-by-one errors. Dijkstraaaa!!!

        (I'm just making a joke/doing a bit here, I don't blame him for off-by-one errors, and counting from zero isn't even the big one I think (more logic errors). Just always find it funny that he wrote an article on why we should start counting at zero. Sorry, I don't have any useful input.)

        E: perhaps some input after all. Not sure if coding in natural language is ever really going to be viable in serious projects, as at the end of the day it needs to be converted to machine code, and there will be a mismatch. Same as writing law like code: there's also a mismatch there.

      • There are plenty of more recent pushes to allow non-technical users to build apps, more than are countable. As far as "great" ones go, maybe Azure Logic Apps? It's Microsoft's option for low/no-code automation in Azure. It's all code under the hood, but it mainly works as premade blocks you drag, drop, and connect like a flowchart. Pretty sure it's event-driven. Most blocks have drop-down options and settings where you fill in the blanks. I think you can also just have some code as a block too.

        Haven't used it myself, just had to help support some of the input, output, and governance. Also have seen it brought up a bunch in Azure certification paths (work has a requirement of some training courses each year, and unfortunately those are the most relevant ones offered through the vendor we have a deal with).

  • Mildly interesting thread about the progress of blacksky: https://bsky.app/profile/did:plc:w4xbfzo7kqfes5zb7r6qv3rw/post/3m2n62lzbeu2p

    They’re aiming for full independence from bluesky, which is a laudable goal though not one they’ve achieved yet. They’re currently getting a reasonable amount of user funding rather than being a typical vc furnace (https://opencollective.com/blacksky) but I’m not sure what their plan is for moderation which is what will carry the project in the long term. I’d like to say it can’t be worse than bluesky, but moderation at scale is a nightmare.

    • one nice thing is that the account migration process on bluesky is painless and straightforward. if we had the ability to not only take our toots (export does exist) but also move them to a new mastodon server, that would be a very nice boon for the fediverse. (it's also one of the oldest open tickets in mastodon's github issues. and yes, i know of slurp, but that's not really frictionless.)

  • How do we make the experience of dating apps even worse? With AI, of course:

    https://www.theguardian.com/lifeandstyle/2025/oct/12/chatgpt-ed-into-bed-chatfishing-on-dating-apps

    The funniest bit is the guy who needed multiple exchanges with the ocean-boiling slop machine to come up with "Hey Sarah, it was lovely to meet you".

    • “I’d already been ChatGPT-ed into bed at least once. I didn’t want it to happen again.”

      According to a 2024 YouGov poll, for instance, around half of Americans aged 18-34 reported having been, like Holly, in a situationship (a term it defines as “a romantic connection that exists in a gray area, neither strictly platonic nor officially a committed relationship”).

      “Over the course of a week, I realised I was relying on it quite a lot,” she says. “And I was like, you know what, that’s fine – why not outsource my love life to ChatGPT?”

      She describes being on the receiving end of the kinds of techniques that Jamil uses – being drilled with questions, “like you’re answering an HR questionnaire”, then off the back of those answers “having conversations where it feels as if the other person has a tap on my phone because everything they say is so perfectly suited to me”.
