Ask HN: What's Going on with AI Psychosis?
I'm seeing more and more people who aren't doing great in their personal lives turning to ChatGPT for comfort, insight, or investigation, and many of them seem to be slowly spiraling into some sort of psychosis. I see it continuously on social media: new people fully parting from reality to believe whatever the computer tells them.
It feels like we've achieved something once thought impossible, and in 3 years it's devolved into roughly 75% 'do my work for me', 15% personal issues (with a subset falling into this delusional group), and 10% gooning...
I'm surprised we aren't seeing more pressure on the major labs to fight this problem... We got a mention of it, via sycophancy, in the GPT-5 announcement, but not much else really.
It feels like this is going to be an increasing problem, but I don't even know where to start with getting my friends and family to even think twice about what comes out of the machine.
Seems like normal amounts of psychosis, but those people now chat with AI.
I don't have any of this in my circles, so I can't relate.
It's not just you. I've seen several news articles with stories of people who were pulled in by AI's human-like responses. They start asking it questions about themselves and then believing its answers...
https://futurism.com/chatgpt-psychosis-antichrist-aliens
https://www.msn.com/en-us/money/other/i-feel-like-i-m-going-...
OpenAI rolled back a ChatGPT model update because it was too "flattering" to users -- which basically led to it agreeing with even their most dangerous delusions, ultimately encouraging them. So the AI companies are aware of the problem. It's just not clear how to fix it.
Personally, I'd like to see less emphasis on "conversations with AI". (It's not an entity with a personality, it just looks like one.) Even if AI can "converse" -- it shouldn't. It shouldn't be used that way, and it shouldn't be promoted that way.