
Posts: 7 · Comments: 540 · Joined: 3 yr. ago

  • I feel a bit regretful sometimes that none of my copious fanfic output has inspired anyone to draw fanart. But at least no one has gotten weird about it, either.

  • Ziz has always had a tendency to express her ideas through metaphors in fiction that are familiar to her. We spoke at length about Contessa and Doctor Mother from Worm; the Wardens from World of Warcraft; Frisk, Sans, and especially Undyne from Undertale; Tassadar from Starcraft; Harry and Dumbledore from HPMOR; Iji.

    Does "read a second book" apply here, or is this a "read a first book" situation?

  • He showed up at my blog to talk math back in the ancient days when I had a comment section. And he's active on Mastodon a fair bit.

  • This is a serious blow to coolness, from which not even drugs will easily recover.

  • When we're all too tired for "let them fight".

  • The third derivative... ah yes, the jerk

  • The Golgafrinchans shipped off the B Ark and then died of a plague (book 2). The Shoe Event Horizon happened on Brontitall (radio series) or Frogstar World B (book 2).

  • And it's not like Orwell wrote a book about talking animals that is required reading in schools across the land.

  • Impenetrable layers of posts for which the prerequisites include a BFSM fanfic, written in the style of forum threads, based on an offshoot of Homework: The Game.

  • Ah, so it's Mythos that will create the ~~nanobots~~ diamondoid bacteria

  • It would be beneath my dignity as a childhood reader of Heinlein and Orwell

    Life is too short to be that pompous

  • There is no cost-cutting in Ba Sing Se

  • All their doom scenarios are made-up sci-fi bullshit, so of course they have free rein to pontificate about the right and wrong ways to prevent them. And because they are high on their own sci-fi, they downplay or neglect or misunderstand the real harms of the rising slop sea. Consequently, they fail to grasp the real social reaction to acts of violence.

  • The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by "regulating" them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a "public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users". And so we would have to "implement an age assurance or verification system to determine whether a current or prospective user on the social media platform" is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what "practicable" age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.

    So, yeah, as an old-school listserv nerd who had the "I am not on Facebook" T-shirt 15 years ago, I don't trust any of these people.

  • This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.

  • "Scientists invented a fake disease. AI told people it was real"

    https://www.nature.com/articles/d41586-026-01100-y

    But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

    The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

    The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

  • NotAwfulTech @awful.systems

    Random Positivity Thread: Happy Food Memories

  • NotAwfulTech @awful.systems

    Coordinating a post-Calibre path forward

    wandering.shop/@xgranade/115680412493693277
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 5th October 2025 - awful.systems

    awful.systems/post/5699944
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 - awful.systems

    awful.systems/post/4885338
  • TechTakes @awful.systems

    Credulous coverage of AI slop on Wikipedia

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 23 February 2025

    awful.systems/post/3491424
  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 26th January 2025 - awful.systems

    awful.systems/post/3278715