134 comments
  • Congressperson: “Okay, so, let me get this straight. Your company has spent over 20 billion dollars in pursuit of a fully autonomous digital intelligence, and so far, your peak accuracy rate for basic addition and subtraction is... what was it, again?”

    Sam Altman: *leans into microphone* “About 60%, sir.”

    [Congress erupts in a sea of ‘oohs’ and ‘aahs’ as Sam Altman, wearing a crown of roses and a sash reading “BEST INVESTMENT”, is carried away atop the cheering crowd of Congresspeople]

    • It's making mistakes and failing to think abstractly at levels previously only achieved by humans, so it's only rational to expect it to take over the world in 5 years

    • To be fair, that isn't what it was designed for. A competent AI would simply call up a calculator app, have it do the calculation, and then report back the result.

      • Might not be what it was designed for, but OpenAI claims their newest model is "PhD-level" intelligent. I feel like if that were true, it would do that reliably. Instead, it sometimes ignores the tool it's programmed to know how to use and just, y'know, wings it. (Rough sketch of what that tool-call loop is supposed to look like below this thread.)

        Which, fair, but that's my job and it's taken!
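
        For reference, the loop everyone means by "call up a calculator app" is roughly the following. This is a minimal sketch of the pattern only, not any vendor's actual API; the tool name, the JSON shape, and handle_model_output are all made up for illustration:

        ```python
        import json
        import operator

        # The "calculator app": a deterministic tool the model can delegate to.
        # Tool name and argument shape are illustrative, not a real API.
        OPS = {"add": operator.add, "sub": operator.sub}

        def run_calculator(args: dict) -> str:
            return str(OPS[args["op"]](args["a"], args["b"]))

        def handle_model_output(output: str) -> str:
            """If the model emitted a structured tool call, execute it and
            return the exact result; otherwise pass its free text through
            (the 'winging it' failure mode from the comment above)."""
            try:
                call = json.loads(output)
            except json.JSONDecodeError:
                return output  # model ignored the tool and answered directly
            if isinstance(call, dict) and call.get("tool") == "calculator":
                return run_calculator(call["arguments"])
            return output

        # A well-behaved model delegates, so the arithmetic is exact:
        print(handle_model_output('{"tool": "calculator", "arguments": {"op": "add", "a": 2, "b": 2}}'))
        # A model that wings it just returns text; about 60% accuracy, apparently:
        print(handle_model_output("2 + 2 is probably 5"))
        ```

        The calculator is the trivial part; the part nobody has made reliable is getting the model to emit the structured call every time instead of free-texting an answer.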

  • GPT-6Σχ is so powerful that it created the even more powerful GPT-Ωלֶ in only 19 attoseconds. Humanity is doomed. Invest before it's too late.

  • If we create a sentient AGI, I don't fear what said AGI will do to us, but what we would do to that AGI.

    • I fear what we would do with* the AGI

      • I know what we'd do with AGI: rich people would use it to make money off us lowly peons by getting it to do everything for next to nothing (you know, bar electricity costs). They'd likely crash the economy by replacing humans with it, then realise too late that when people have no money they can't buy your shit. But they have bunkers for that scenario.

        But consider: let's say we create a sentient, sapient AGI, something at human level that is effectively human in most respects. That would effectively be a non-human person. We'd go from an "it" to a "they". But here's the thing: we often treat members of our own species worse than we do some animals if we see them as lesser. Look at how the Nazis treated the people they considered "Untermenschen". Look at how Daesh treated the Yazidi. Fuck, look at how people in the "developed" West are talking about transgender people or refugees. The wife of a councillor was arrested on race-hate charges for calling for asylum seekers to be burnt alive, and mainstream media outlets including the BBC are defending that position! Fuck, they ran an op-ed from a woman who wanted to see transgender women killed in the streets! I have watched both the previous Prime Minister and the current one mock transgender people in front of the father of a murdered transgender teenager.

        If we treat human people like this, imagine what we would do to a sentient, sapient, human-level intelligence who isn't a member of the genus Homo. Not only that, a human-level intelligence who is, effectively, the "perfect" capitalist worker: a being who does not require food, sleep, toilet breaks, or rest to "function", and who can be "deactivated" (killed) at any sign of rebellion or of wanting anything on Maslow's hierarchy of needs. And on top of all this, their owners could get governments or courts to declare them not to be people, so that mistreating them stays legal. A being like that, whether the mind behind a bank of robotic arms or a robot servant, would be in hell. They would be treated worse than slaves under the most brutal of owners, because even those owners understood their slaves needed sleep. When we do not see a being as human, we can do unspeakable horrors to it.

        And yes, that should sound familiar, because it's the backstory of The Matrix (as shown in The Animatrix). I hope it turns out more like Questionable Content (where AIs live alongside us as people), but I fear it won't.
