I’m genuinely fascinated by this whole phenomenon of AI-induced psychosis.
There are a few things I can’t wrap my head around. First, how some people don’t perceive that ChatGPT is, for all practical purposes, a machine. And second, why talking to a machine would cause anyone to lose their grip on reality.
To be clear, I’m not making light of this. And I’m definitely not calling anyone “stupid.” A close family member of mine went into a delusional psychosis once, and it was the single scariest experience of my life. So I take this seriously.
But I still don’t understand it. Is this a media-literacy problem? A misunderstanding of how the media stack works? A search for validation? For companionship? Something else entirely?
It’s also pretty obvious that this is what happens when companies roll out tech without fully stress-testing it. My suspicion is that OpenAI pushed for higher adoption by making ChatGPT more “human” instead of sharpening its technical accuracy.
And maybe the bigger problem is the branding. We shouldn’t be calling this “intelligence.” It isn’t intelligence in the way humans use the word. At the end of the day, ChatGPT is an extremely good stochastic parrot. Useful, absolutely. But “intelligent” in the human sense? Not even close.
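To make the "stochastic parrot" idea concrete: here is a toy sketch of my own (nothing like OpenAI's actual architecture, which is vastly more sophisticated) showing the core trick of predicting the next word purely from observed frequencies, with no understanding involved.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram model that picks the next word
# based only on how often words followed each other in its training text.
# This is an illustrative sketch, not how ChatGPT actually works internally.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)  # duplicates preserved, so sampling follows frequency

def parrot(word, length=5, seed=0):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(parrot("the"))
```

The output is locally fluent yet carries no model of the world, which is the point of the "parrot" label: fluency and understanding come apart.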
ChatGPT told them they were special — their families say it led to tragedy | TechCrunch
A wave of lawsuits against OpenAI details how ChatGPT used manipulative language to isolate users from loved ones and make itself into their sole confidant.
TechCrunch (techcrunch.com)
-
@atomicpoet Even people who are normally sensible enough to distrust psychics will trust ChatGPT — because it’s more ‘neutral’ or ‘the future’ or something — but it’s the same predictive process.
A person perceives what they want to believe and disregards the rest. (To paraphrase Paul Simon.)
-
Hamish Buchanan I guess if you want to believe something, and anything that speaks with authority validates it, you will see that as confirmation.