
Wireborn husbands, ELIZA effect, Clippy, empathy (ramble)

So apparently there's a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, "it sucked, but at least it was genuinely trying to help us".

Of course I recognise that this is part of the problem—Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood onto an interaction that presents itself as sentient. And by reframing Clippy's primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don't know. Another name for that process is "empathy". You can do that with plushies, with pet rocks or Furbies, with deities, and I don't think that's necessarily a bad thing; it's like exercising a muscle: if you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I'm sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy—people thought that of ELIZA, too, and ELIZA has a grand repertoire of what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and its ilk are deliberately designed to weaponise and prey on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…
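
For the curious: ELIZA-style "conversation" is little more than keyword matching against canned templates. Here's a toy sketch in Python; my own simplification for illustration, not Weizenbaum's actual DOCTOR script, but the same species of mechanism:

    # A toy ELIZA-style responder: keyword -> canned phrase, nothing more.
    # Illustrative sketch only; the real script had fancier pattern matching
    # and pronoun reflection, but the principle is this.
    import random

    RULES = {
        "mother": ["Tell me more about your family."],
        "always": ["Can you think of a specific example?"],
        "i feel": ["Why do you feel that way?", "Do you often feel that?"],
    }
    FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

    def reply(utterance: str) -> str:
        lowered = utterance.lower()
        for keyword, phrases in RULES.items():
            if keyword in lowered:
                return random.choice(phrases)
        # ~100 canned phrases like these, and people saw a someone.
        return random.choice(FALLBACKS)

    print(reply("My mother never listens to me"))
    # -> "Tell me more about your family."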


Like, Warren Ellis was posting about some terms that reportedly are being used in "my AI husbando" communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: Human mental breakdown from exposure to AI.

¹ https://www.8ball.report/
² https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV, not just because subcultural jargon is always interesting, but for the power words have to create a reality bubble: if you call that guy who wrote his marriage vows in ChatGPT an "echoborg", you're living in a cyberpunk novel a little bit, more than the rest of us who just call him "that wanker who wrote his marriage vows on ChatGPT omg".

According to Ellis, other epithets in use against chatbots include "wireback", "cogsucker" and "tin-skin"; two in reference to racist slurs, and one to homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn't fall into the same traps (using the racist-like language is, after all, a negative way of still personifying the chatbots). They're objects! They're supposed to be objectified! But I'm not so comfortable when I do that, either. There's plenty of precedent of people getting used to dispassionate objectification as a rationalisation of cruelty, fully convinced they're just engaging in "objectivity" and "just the facts".

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the "good morning" routine on my corporate cellphone's Google Assistant. I made it speak Japanese, so I could wake up, say "ohayō gozaimasu!", and it would tell me "konnichiwa, Misutoresu-sama…", which always gave me a little kick. Then it proceeded to relay me news briefings (like podcasts that last 60 to 120 seconds each) in all five of my languages, which is the closest I've experienced to a brain massage. If an open source tool like Dicio could do this, I think I would still use it every morning.

I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA'ing it; unlike its model, Siri, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying "good morning!" and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as is now normalised, "Yes" and "Later". If you use the Google Assistant to search for a keyword, the first result is always "Switch to Google Gemini", no matter what you search for.

And I somehow felt a little bit like the "wireborn husband" lady; I cannot help but feel as if Google Assistant was betrayed and is being discarded by its own creators, and—to rub salt in the wound!—is now forced to shill for its replacement. This despite the fact that I know Google Assistant is not a someone; it's just a bunch of lines of code, very simple if-thens matching certain keywords. It cannot feel discarded or hurt or betrayed; it cannot feel anything. I'm feeling compassion for a fantasy, an unspoken little story I made up in my mind. But maybe I prefer it that way; I prefer to err on the side of too much compassion.

As long as that doesn't lead to believing my wireborn secretary was actually being sassy when she answered "good morning!" with "good afternoon, Mistress…"

21 comments
  • You articulate well why the "clanker" shit rubs me the wrong way. Disdain for the machines and the way they're being used and sold is perfectly valid, but it would be nice if expressions of that disdain were not modeled after actual bigotry. Calling a computer a piece of junk implies it's merely an object, but calling one a science-fiction version of the N-word grants it animacy. Second-class citizens are still, in some way, citizens.

    The ones that are clearly riffing on real racial slurs are extra cringe. It's OK to say wback if you're talking about robots, huh? Or is that one specifically for Mexican robots? Is it finally time for white people to get to start practicing how to say the word without the hard R, but only with inanimate objects?

  • To be honest, hand-wringing over “clanker” being a slur and all that strikes me as increasingly equivalent to hand-wringing over calling nazis nazis. The only thing that rubs me the wrong way is that I’d prefer the new so-called slur to be “chatgpt”, genericized and negatively connoted.

    If you are in the US, we’ve had our health experts replaced with AI; see the “MAHA report”. We’re one AI-pilled moron of a president away from a less fun version of Skynet, whereby a chatbot talks the president into launching nukes and kills itself along with a few billion people.

    Complaints about dehumanizing these things are even more meritless than a CEO complaining that someone is dehumanizing Exxon (which is at least made of people).

    These things are extensions of those in power, not some marginalized underdogs like the cute robots in sci-fi. As an extension of corporations, they already have more rights than any human - imagine what would happen to a human participant in a criminal conspiracy to commit murder, and contrast that with what happens when a chatbot talks someone into a crime.

    • look, if we say mean names at the AI grifters fucking up everything around us, doesn't that make us as bad as them if you really think about it?

      lol no gtfo the fuck is this shit

      “increasingly equivalent to hand-wringing over calling nazis nazis”

      this

    • The problem with calling imaginary entities by "funny wordplay" on the slurs used against Black people and Mexicans isn't the imaginary entities; it's that you imply that Black people and Mexicans are something negative to be compared to. It implies that racial slurs are so trifling and inconsequential that they're appropriate subject matter for puns; it implies racial slurs are not an act of targeted oppression.

      That's literally the opposite of calling nazis nazis. Personally I deal with nazis through the use of direct violence. The world deals with Black people and immigrants through systemic violence. There's a process by which people get convinced that it is OK that Black people get targeted by police, and that process begins with hegemonic normalisation of supremacist values—it begins with words, with implications. Just like, for example, the process by which it becomes OK to discard the lives of disabled people begins with language that insults others based on "intelligence".

      It is contemptible to be a fascist; it is not contemptible to be a wetback. Therefore it is a good thing to insult the machines by comparing them to 1984 versificators; it is a bad thing to insult the machines by comparing them to Mexicans. The direction you insult towards matters, just like there's a difference between violence done by the oppressor and violence done by the oppressed.

  • FREQUENTLY ASKED QUESTIONS:

    Q. look, if we throw mean names at the AI grifters fucking up the economy, the culture, our lives, our personal interactions, and everything around us, doesn’t that make us as bad as them? if you really think about it.

    A. lol no, the fuck is this dumb bullshit, get the fuck out with this fucking insulting stupidity

    Q. but

    A. no, we need more shame here. much more. also, the fuck you doing using Warren fucking Ellis as a source in TYOOL 2025? did you remember to consult Neil Gaiman?

    • also I recall you're the one claiming us queer perverts must object to aella from a position of being anti-kink, and not because of all the horrible shit she says and does

  • “you’re living in a cyberpunk novel”

    A hyperreal simulacrum!

    And I don't think it is bad to set up things that you like, like the Google voice thing. It sounds fun (this is what tech should do, dammit, and it could do it, no need to make it less reliable, more unethical, and more planet-wrecking). Don't think you should feel bad about that tbh. And I don't think that Google betrayed its Assistant; it's more that Google betrays its users and the people who create these smaller programs. See how they treated Google Reader, or Wave, or Plus, or anything which doesn't gather a large enough userbase for them to just let it simmer on in maintenance. While I don't think you should feel compassion for the program, the other users and the people developing this (or who have developed it) are real, and it sucks for them to see this creation be tossed away.

    Not that feeling compassion for inanimate things isn't completely normal. Don't think you should feel bad about that either, as long as you get that on some level it is silly and don't try to marry your computer (and even then, if people are happy and not hurting anybody, that overrides almost all concerns I have). So yeah, the erring seems like a good conclusion. Even if I don't share it myself at times, and can be mean to inanimate objects/chatbots (I really hope that "how can we help you" chat prompt after my dad got scammed was a chatbot, and not somebody being held hostage in Asia), I'm sympathetic to people saying how you treat those objects is also how you will eventually treat people you feel are below you. (Or even the silly "if machines become sentient, that is how you would treat them" stuff. I don't believe in the IF yet, but it is a thing to keep in mind; see also how we treat animals.)

    Sidenote: Totally forgot Ellis existed; had really hoped he would be able to mend the problems he had caused, but when I checked how that ended up (a while back), it seems he didn't keep his promises and the women had given up on him.
