
Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

175 comments
  • Eurogamer has opinions about genai voices in games.

    Arc Raiders is set in a world where humanity has been driven underground by a race of hostile robots. The contradiction here between Arc Raiders' themes and the manner of its creation is so glaring that it makes me want to scream. You made a game about the tragedy of humans being replaced by robots while replacing humans with robots, Embark!

    https://www.eurogamer.net/arc-raiders-review

  • I’m being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code 😭 and I thought I’d try and find out what the rest of the world thought that meant.

    Turns out there’s almost no way of telling anymore, because the internet is filled with genai listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.

    Anyway, I eventually found some non-obviously-ai-generated work and books, and it turns out that even before llms flooded the zone with shit no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.

    Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

    • The zone has indeed always been flooded, especially since it's a title that collides with "integration architect" and other similar titles whose jobs are completely different. That being said, it's a title I've held before, and I really enjoyed the work I got to do. My perspective will be a little skewed here because I specifically do security architecture work, which is mostly consulting-style "hey, come look at this design we made, is it bad?" rather than developing systems from scratch, but here's my take:

      Architecture is mostly about systems thinking-- you're not as responsible for whether each individual feature, service, component, etc. is implemented exactly to spec or perfectly correctly, but you are responsible for understanding how they'll fit together, what parts are dangerous and DO need extra attention, and catching features/design elements early on that need to be cut because they're impossible or create tons of unneeded tech debt. Speaking of tech debt, making the call about where it's okay to have a component be awful and hacky, versus where v1 absolutely still needs to be bulletproof, probably falls into the purview of architecture work too. You're also probably the person who will end up creating the system diagrams and at least the skeleton of the internal docs for your system, because you're responsible for making sure people who interact with it understand its limitations as well.

      I think the reason so much of the advice on this sort of work is bad or nonexistent is that when you try to boil the above down to a set of concrete practices or checklists, they get utterly massive, because so much of the work (in my experience) is knowing what NOT to focus on, where you can get away with really general abstractions, etc, while still being technically capable enough to dive into the parts that really do deserve the attention.

      In addition to the nice markers and whiteboard, I'd plug getting comfortable with some sort of diagramming software, if you aren't already. There's tons of options, they're all pretty much Fine IMO.

      For reading, I'd suggest at least checking out the first few chapters of Engineering A Safer World, as it definitely had a big influence on how I practice architecture.

    • Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

      Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don't like and you'll make it

    • Ugh OK I have to vent:

      I'm getting pushed into more of a design role because, oops, my company accidentally fired or drove away all of a team of a dozen people except for me, after forgetting for a few years that the code I work on is actually mission critical.

      I do my best at designing stuff and delegating the implementation to my coworkers. It's not one of my strengths but there's enough technical debt from when I was solo-maintaining everything for a few years that I know what needs improving and how to improve it.

      But none of my coworkers are domain experts, they haven't been given enough free time for me to train them into domain experts, there's only one of me, and the higher ups are continuously surprised that stuff is going so slow. It's frustrating for everyone involved.

      I actually wouldn't mind architecture or design work in better circumstances since I love to chat with people; but it feels like my employer has put me in an impossible position. At the moment I'm just trying to hang in there for some health insurance reasons; but in a few years I plan to leave for greener pastures where I can go a day without hearing the word "agentic".

    • Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”

      Fun article.

      Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would come about either through genetic engineering or plugging “an electrode into the brain”.

      Occasionally I feel that Altman may be plugged into something that's even dumber and more under the radar than vanilla rationalism.

    • These people aren't looking for scientists, they're looking for alchemists

  • new zitron: ed picks up a calculator and goes through docs from microsoft and some others, and concludes that openai has less revenue than previously thought (probably? neither ms nor openai would comment), spends more on inference than previously thought, and that openai revenue inferred from microsoft's share is consistently well under inference costs https://www.wheresyoured.at/oai_docs/

    Before publishing, I discussed the data with a Financial Times reporter. Microsoft and OpenAI both declined to comment to the FT.

    If you ever want to share something with me in confidence, my signal is ezitron.76, and I’d love to hear from you.

    also on ft (alphaville) https://www.ft.com/content/fce77ba4-6231-4920-9e99-693a6c38e7d5

    ed notes that there might be other revenue, but the costs counted here are only azure inference; then there are training costs wherever those get filed, plus debts, commitments, salaries, marketing, and so on and so on

    e: fast news day today eh?

  • "That's a lovely kernel you have there how about if we improve it a bit with some AI."

    Slop coding is coming to the Linux kernel. 🤮

    • Ah, the site requires me to agree to "data processing by advertising providers including personalised advertising with profiling", and that this consent is "required for free use". A blatant GDPR violation, love-lyy!

      • Don't worry about it. GDPR is getting gutted and we also preemptively did anything we could to make our data protection agencies toothless. Rest assured citizen, we did everything we could to ensure your data is received by Google and Meta unimpeded. Now could someone do something about that pesky Max Schrems guy? He keeps winning court cases.

  • Omg is claude down? ::: spoiler because I'm gonna steal his shoes. :::

    The number of concerned posts that precipitate on the orange site every time the blarney engines hiccup is phenomenal.

    One of these days, it ain't coming back.

  • Synergies!

    Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.

    Reactor licensing is a simple mechanisable form filling exercise, y’know.

    “Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

    https://www.404media.co/power-companies-are-using-ai-to-build-nuclear-power-plants/

    (Paywalled, at least for me)

    There’s a much longer, drier and more detailed (but unpaywalled) document here that 404 references:

    https://ainowinstitute.org/publications/fission-for-algorithms
