ChatGPT is pushing people towards mania, psychosis and death
What's that from?
A show called The Starlost.
The Starlost is a Canadian-produced science fiction television series created by writer Harlan Ellison and broadcast in 1973 on CTV in Canada and syndicated to local stations in the United States. The show's setting is a huge generational colony spacecraft called Earthship Ark, which following an unspecified accident has gone off course.
It's like reading an article about a petrol refining company, who, having prior experience with gasoline as a useful and profitable substance, decides to seek venture capital for the development of a petrol-based fire-extinguisher. They obtain the funding - presumably because some people with money just want to see the world burn and / or because being rich and having brains is not necessarily strongly correlated - but after having developed the product, tests conclusively prove the project's early detractors right: The result is always more fire, not less. And they "don't know how to fix it, while still adhering to the vision of a petrol-based fire-extinguisher".
The "fight fire with fire" marketing campaign is getting a lot of engagement so we're releasing the product anyway.
Nah, you could definitely make one. Ensure the petrol is completely aerosolized, so that it burns completely and quickly. Now it just needs to be able to burn oxygen out of a room faster than it can get in. Or it could use the burning petrol to generate compounds and CO2 to suffocate the fire. Get yourself basically a petrol-powered weedeater and replace the rope with some sort of heat dissipater. As it spins, it shoots the heat elsewhere, somewhere safer.
Those are some interesting and creative suggestions. Now, I'm no weapons engineer, but I believe there's a term for aerosolized gasoline when deployed to put out a fire, and that term is "thermobaric bomb".
Never mind that though, it'll totally work: Not only is a building that no longer exists not a building on fire, but it's guaranteed to never catch fire again. Problem permanently solved. If you're in the market for a job, I've been told that Hellfire ("We may not put you out, but we'll definitely put you down") Inc. is hiring.
As much as I despise all the hype around AI, it's that hype that's probably leading vulnerable people to these ends
I wonder if it has something to do with this:
"users who turn to popular chatbots when exhibiting signs of severe crises risk"
Blaming the chatbot doesn't seem like the smartest perspective; the title is fucking bullshit.
There are a lot of questionable things that people in crisis turn to. Intoxicants, religion, c/tenforward, fascism.
It's as simple as "correlation does not imply causation".
The title makes it sound like it's all people.
A better one might be "ChatGPT is failing to help people in crises, and many are dying"
Papers gotta get clicks. Maybe capitalism is the real villain here.
So, it’s not ChatGPT, it’s all LLMs. And the people who go through this are using AI wrong. You cannot blame the tool because some people prompt it to make them crazy.
But you can blame the overly eager way it has been made available without guidance, restriction, or regulation. The same discussion applies to social media, or tobacco, or fossil fuels: the companies didn't make anyone use it for self destruction, but they also didn't take responsibility.
First nuanced argument I've seen on this topic. Great point. Just like bottle manufacturers started the litterbug campaign. I think the problem with LLMs has to do with profit motive as well - specifically broad data sets with conflicting shit, like the water bunny next to general relativity, made for broad appeal. AI gets a lot more useful when you curate it for a specific purpose. Like, I dunno. Trying to influence elections or checking consistency between themes.
Found Grok.
Haha. Vote for Elon!
/s
So what is the correct usage?
It will give you whatever you want. Just like social media and google searches. The information exists.
When you tell it to give you information in a way you want to hear it, you’re just talking to yourself at that point.
People should probably avoid giving it prompts that fuel their mania. However, since mania is totally subjective and the topics range broadly, what does an actual regulation look like?
What do you use AI for?