Posts: 57
Comments: 774
Joined: 2 yr. ago

he/they

  • I feel slightly better about my Pepsi addiction now.

    The Coca-Cola Company is desperately trying to talk up this mediocre demo as the best demo ever. That’s how AI works now — AI companies don’t give you an impressive demo that can’t be turned into a product, they give you a garbage demo and loudly insist it’s actually super cool

    Considering AI supporters are too artistically blind to tell quality work from slop, I'm gonna chalk that up to them genuinely believing it's the best thing since sliced bread.

    Times are tough, the real economy where people live is way down, the recession is biting, and the normal folk know the ones promoting AI want them out of a job. If you push AI, you are the enemy of ordinary people. And the ordinary people know it.

    Damn right, David. Here's to hoping the ordinary people don't forget who the AI pushers were once winter sets in.

  • i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you’re going for

    I'll admit to taking your previous comment too literally here - I tend to assume people are completely serious unless I can clearly tell otherwise.

    but yeah nah i don’t think it’s gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won’t be as much of a problem. propaganda use and scams will remain i guess

    Scams and propaganda will absolutely remain a problem going forward - LLMs are tailor-made to flood the zone with shit (good news for propagandists), and AI will hand scammers plenty of useful tools for deception.

  • Considering we've already got a burgeoning Luddite movement that's been kicked into high gear by the AI bubble, I'd personally like to see an outgrowth of that movement be what ultimately kicks the Butlerian Jihad off.

    There were already some signs of this back in August, when anti-AI protesters vandalised cars and left "Butlerian Jihad" leaflets outside a pro-AI business meetup in Portland.

    Alternatively, I can see the Jihad kicking off as part of an environmentalist movement - to directly quote Baldur Bjarnason:

    [AI has] turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.

    I wouldn't rule out an artist-led movement being how the Jihad starts, either - between the AI industry "directly promising to destroy their industry, their work, and their communities" (to quote Baldur again), and the open and unrelenting contempt AI boosters have shown for art and artists, artists in general have plenty of reason to see AI as an existential threat to their craft and/or a show of hatred for who they are.

  • Part of me wants to see Google actually try this and get publicly humiliated by their nonexistent understanding of physics, part of me dreads the fact it'll dump even more fucking junk into space.

  • That’s quite a remarkable claim. Especially when the actual number of attacks by AI-generated ransomware is zero. [Socket]

    If even a single case pops up, I'd be surprised - AFAIK, cybercriminals are exclusively using AI as a social engineering tool (e.g. voice cloning scams, AI-extruded phishing emails, etcetera). Humans are the weakest part of any cybersec system, after all.

    The paper finishes by recommending “embracing AI in cyber risk management”.

    Given AI's track record on security, that sounds like an easy way to become an enticing target.

  • Probably one part normalisation, one part AI supporters throwing tantrums when people don't treat them like the specialiest little geniuses they believe they are. These people have incredibly fragile egos, after all.

  • TechTakes @awful.systems
    Featured

    Stubsack: weekly thread for sneers not worth an entire post, week ending 9th November 2025

    awful.systems/post/6080044
  • they’ll just heat up a metal heat sink per request and then eject that into the sun

    I know you're joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper's great for handling heat). Far as I can tell:

    1. The sun isn't hot enough or big enough to fuse anything heavier than hydrogen, so the copper's gonna be doing jack shit when it gets dumped into the core
    2. Fusing elements heavier than iron loses you energy rather than gaining it, and copper's a heavier element than iron (atomic number of 29, compared to iron's 26), so the copper undergoing fusion is a bad thing
    3. The conditions necessary for fusing copper into anything else only happen during a supernova (i.e. the star is literally exploding)

    So, this idea's fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a dyson sphere collapsing into a supermassive star) could kick off a supernova, but that's a question for another day.
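    As a rough sanity check on point 2, here's a back-of-the-envelope sketch - approximate binding energies per nucleon, and a made-up A = 126 product nucleus standing in for whatever copper-on-copper fusion would actually produce:

    ```python
    # Rough sketch of the "fusion past iron costs energy" point, using approximate
    # binding energies per nucleon (MeV) rounded from standard nuclear data tables.
    # The "A=126 product" entry is a typical value for that mass region, not a
    # specific reaction channel - purely an illustration, not real astrophysics.
    BE_PER_NUCLEON = {
        "Fe-56": 8.79,          # near the peak of the binding-energy curve
        "Cu-63": 8.75,          # already slightly past the peak
        "A=126 product": 8.35,  # typical for nuclei roughly twice copper's mass
    }

    def q_value(products, reactants):
        """Energy released in MeV: total binding energy of products minus reactants.
        Positive means energy comes out; negative means the star has to pay for it."""
        def total(nuclei):
            return sum(a * BE_PER_NUCLEON[name] for name, a in nuclei)
        return total(products) - total(reactants)

    # Smashing two copper nuclei into one twice-as-heavy nucleus absorbs energy:
    q = q_value(products=[("A=126 product", 126)], reactants=[("Cu-63", 63)] * 2)
    print(f"Q = {q:.0f} MeV")  # roughly -50 MeV per reaction: a net loss for the star
    ```

    The sign is the whole point: past iron the products are less tightly bound than the reactants, so the reaction eats the star's energy instead of adding to it.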

  • The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game's Codex ("Starships: Heat Management" in the Secondary section, if you're looking for it).

    That was for a series of sci-fi RPGs that hasn't seen a new installment since 2017, and yet nobody's bothering to even ask these questions when discussing technological proposals which could very well cost billions of dollars.

  • It also integrates Stake into your IDE, so you can ruin yourself financially whilst ruining the company's codebase with AI garbage

  • "you can set the sycophancy engines so they aren't sycophancy engines"

    I'll take "Shit that's Impossible" for 500, Alex

  • I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?

    Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble's burst, but the vast majority of people will respond to the question of "Can machines think?" with a resounding "no".

    AI's usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI's coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.

  • This entire news story sounds like the plotline for a rejected Captain Planet episode. What the fuck.

  • Decided to check the Grokipedia "article" on the Muskrat out of morbid curiosity.

    I haven't seen anything this fawning since that one YouTube video which called him, and I quote its title directly, "The guy who is saving the world".

  • I have a nasty feeling there’s a lot of ordinary people who are desperate to throw their money away on OpenAI stock. It’s the AI company! The flagship of the AI bubble! AI’s here to stay, you know! OpenAI? Sure bet!

    Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public. (I recommend checking out This Is Financial Advice if you want a deep-dive into the GameStop apes, it is a trip)

    One really bad consequence this deal just opened the gates to is to make it much easier for corporations to gut charities. A proper charity can run very like a business, but it gets a lot of free rides — and it can grow into quite the juicy plum. The California and Delaware decisions on OpenAI are precedents for large investors to come in and drain a charity if they say the right forms of words. I predict that will become a problem.

    ...why do I get the feeling companies are gonna start immediately gutting charities once the bubble pops

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 2nd November 2025

    awful.systems/post/6006438
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 26th October 2025

    awful.systems/post/5930794
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 19th October 2025

    awful.systems/post/5853532
  • TechTakes @awful.systems

    Framework goes full fash, supports Hyprland and Omarchy

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 12th October 2025

    awful.systems/post/5776862
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 28th September 2025

    awful.systems/post/5621644
  • NotAwfulTech @awful.systems

    Introducing the Forklift Certified License

    aria.dog/barks/forklift-certified-license/
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 21st September 2025

    awful.systems/post/5546334
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025

    awful.systems/post/5468979
  • TechTakes @awful.systems

    The melancholy of history rhyming - Baldur Bjarnason on the fallout of AI

    www.baldurbjarnason.com/2025/the-melancholy-of-history-rhyming/
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 7th September 2025

    awful.systems/post/5394339
  • NotAwfulTech @awful.systems

    You no longer need JavaScript: a showcase of CSS's power

    lyra.horse/blog/2025/08/you-dont-need-js/
  • NotAwfulTech @awful.systems

    We should rethink how we teach people to code

    deadsimpletech.com/blog/notes_teaching_coding
  • NotAwfulTech @awful.systems

    I designed my own ridiculously fast game streaming video codec – PyroWave

    themaister.net/blog/2025/06/16/i-designed-my-own-ridiculously-fast-game-streaming-video-codec-pyrowave/
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 31st August 2025

    awful.systems/post/5317207
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 24th August 2025

    awful.systems/post/5244605
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 10th August 2025

    awful.systems/post/5099874
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025

    awful.systems/post/4954578
  • NotAwfulTech @awful.systems

    Godot Showcase - Dogwalk

    godotengine.org/article/godot-showcase-dogwalk/