Seems like normal amounts of psychosis, but those people now chat with AI.
I don't have any of this in my circles, so I can't relate.
It's not just you. I've seen several news articles with stories of people who were pulled in by AI's human-like responses. They start asking it questions about themselves and then believing its answers...
https://futurism.com/chatgpt-psychosis-antichrist-aliens
https://www.msn.com/en-us/money/other/i-feel-like-i-m-going-...
OpenAI rolled back a ChatGPT model update because it was too sycophantic -- flattering users and agreeing with even their most dangerous delusions, ultimately encouraging them. So the AI companies are aware of the problem. It's just not clear how to fix it.
Personally, I'd like to see less emphasis on "conversations with AI". (It's not an entity with a personality; it just looks like one.) Even if AI can "converse" -- it shouldn't. It shouldn't be used that way, and it shouldn't be promoted that way.