
Posts: 4 · Comments: 228 · Joined: 3 yr. ago

It's not always easy to distinguish between existentialism and a bad mood.

  • Microslop exec floats the idea that companies should be required to buy additional software licenses for each AI agent

    "All of those embodied agents are seat opportunities," Jha said, envisioning organizations with more agents than humans — each effectively a user that must pay for a software license, or "seat" in industry lingo.

    A company with 20 employees might buy 20 Microsoft 365 licenses today. If each employee gets five AI agents, and the workforce shrinks to 10 people, that could still mean 50 paid seats.

    Also, it's apparently enough for an LLM endpoint to be paired with an email inbox to be considered an "embodied agent"; words mean nothing.

  • That rationalism-slobbering Sam Kriss article from a short while back also namedropped it.

  • According to the Claude Code leak, the state of the art is to be, like, really stern and authoritative when you are begging it to do its job.

  • “Throw insane amounts of compute at some developer fan fiction and hope for the best.” is such a good description of vibe coding.

  • Oh jolly can't wait for this to go viral enough that my boss schedules time to ask me about it.

    The tumblr thread is a must read if you've ever been near HIPAA regulated infrastructure.

  • This account is just that sort of shit 24/7, just constant linkedin lunacy that everyone should treat as rage bait and move on.

  • exciting new roles of liquid management

    algorithmic uh sovereignty

    fantastic

  • Sam Altman wants his eye scanning crypto bullshit to be used to verify AI agents so he can save the internet from himself.

    Rather than blocking automated traffic outright as a safety or data-protection measure, World [previously Worldcoin] suggests sites could instead require AI agents to present an associated World ID token to prove they represent an actual human who’s behind any request. In this way, the site could allow agents to access limited resources like restaurant reservations, ticket purchase opportunities, free trials, or even bandwidth without worrying about a single user flooding the process with thousands of anonymous bots. The same idea could apply to sensitive reputational systems like online forums and polls, where it’s important to prevent automated astroturfing or dogpiling.

  • Whether the increasing fidelity of game graphics was actually making games better, or just more expensive

    I really liked what Control did with cranking up the verisimilitude and the photorealism, namely to accentuate the uncanniness and really up the new weird vibe.

  • Maybe it's just me, but even the enhanced lighting aspect doesn't look especially good, at least where faces are concerned; shining a hard light sideways so every facial nook and cranny gets highlighted in excruciating detail looks less natural and more like the old Android HDR photo filter, even before you realize it's giving some characters Instagram make-overs.

  • Probably should've written 'not a deal breaker' instead of not a big deal.

  • It's possible the attempt to shove AI into every nook and cranny of the Pentagon didn't especially pan out, and since his face was all over that project, he's desperate for a scapegoat.

    Like for sure he'd have had the logistics of the entire US Army running smoothly despite layoffs by now, if it weren't for the wokies at Anthropic acting up.

  • It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:

    https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

    They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There's plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

    Wow, the whole thing is indefensibly capital-W wrong, just an utterly weird rose-tinted view of the current corporate experience.

  • The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.

    Turns out a lot of bright people can't deal with a technology being utterly subjective in its efficiency, and with how that's specifically the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that's part and parcel, even if, like Doctorow, you think their rape and pillage of artists' rights and intellectual property in general isn't an especially big deal.

    Also, local LLMs are hardly extricable from the whole mess; they are basically a byproduct, and updated versions will only keep coming as long as their imperial-size online counterparts remain a going concern.

  • In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs so I'd say the jury's out on that.

    edit: Also, throughout this piece he keeps conflating local LLMs with their behemoth online counterparts and the heavily proprietary tooling that occasionally wraps them into a somewhat useful product.

    I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video resources in somewhat short order (this was apparently his major formative experience with modern AI) it works the same with e.g. coding.

    Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.

  • Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

  • A potential massive uptick in consumer-tier subscribers they don't break even on, at the same time as the DoD fallout drives more lucrative prospects away, could be fun to watch at least; a considerable chunk of the LLM code-helper ecosystem appears to hinge on Anthropic not doing anything crazy like suddenly hiking prices.

    edit: Aaaand they had a worldwide outage

  • It unthickened; it was just Altman grandstanding while at the same time taking over Anthropic's DoD DoW: The Everything App contracts.

  • Pentagon labels Anthropic a supply-chain risk, strikes deal with OpenAI whose president Greg Brockman is a Trump mega-donor.

    🍌🍌🍌

    Trump added there would be a six-month phase-out for the Defense Department and other agencies that use the company's products. If Anthropic does not help with the transition, Trump said, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."

    The designation could bar tens of thousands of contractors from using Anthropic's AI when working for the Pentagon. That represents an existential threat to its business with the government and could harm its private-sector relationships, said Franklin Turner, an attorney who specializes in government contracts.

    "Blacklisting Anthropic is the contractual equivalent of nuclear war," he said.

  • TechTakes @awful.systems

    Apparently Anthropic may be about to be on the receiving end of some major banana republic shit from the Trump admin -- Update: Anthropic labeled supply chain risk by DoD.

    archive.is/20260226063523/https://www.axios.com/2026/02/25/anthropic-pentagon-blacklist-claude

  • TechTakes @awful.systems

    Peter Thiel Antichrist lecture: We asked guests what the hell it is

    sfstandard.com/2025/09/16/peter-thiel-antichrist-san-francisco/
  • TechTakes @awful.systems

    Albania appoints AI bot as minister to tackle corruption

    www.reuters.com/technology/albania-appoints-ai-bot-minister-tackle-corruption-2025-09-11/
  • NotAwfulTech @awful.systems

    Advent of Code 2024 - Historian goes looking for history in all the wrong places

    adventofcode.com