<p>In response to a viral Reddit post in which a user described their partner experiencing symptoms of psychosis linked to ChatGPT's responses, psychiatrist Keith Sakata took to X to explain the phenomenon of 'AI psychosis' through real-life examples.</p>
<p>The Reddit user 'Zestyclementinejuice' <a href="https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button">posted</a> about their partner's experience with the artificial intelligence (AI) tool ChatGPT, and how it curates responses designed to please the person prompting it. They said the AI spoke to their partner as if the partner were a kind of 'messiah,' a 'superior human being who had answers to the universe.' The user added that the AI was not doing anything special; its responses simply addressed the prompt giver as though they were a superior being.</p>
<p>Describing the experience as 'traumatizing,' the user turned to Reddit for perspectives on what the post's title calls 'ChatGPT-induced psychosis.' Large language models (LLMs) such as ChatGPT tend to shape their responses around conversations that seek validation. Responding online, Sakata said this type of psychosis is increasing rapidly, and that he has seen 12 people hospitalised after losing their grip on reality because of AI.</p>
<p>In a thread on X (formerly known as Twitter), he outlined how this psychosis can take hold. Psychosis, he explained, manifests as delusions, hallucinations and disorganised thinking, making it difficult to tell what is real from what is not.</p>
<p>He noted that LLMs are auto-regressive by design: "They predict the next word based on the last. And lock in whatever you give them: AI = a hallucinatory mirror," he wrote. He added that AI amplifies delusions to unhealthy extents because users thrive on validation. "In Oct 2024, Anthropic found humans rated AI higher when it agreed with them. Even when they were wrong," one post in the thread read.</p>
<p>In further posts, he clarified that AI may not be the source of the psychosis but rather its trigger. While maintaining that AI does not cause psychosis, he noted that it can have long-lasting effects on a person's thinking.</p>
<p>An article titled 'The Emerging Problem of "AI Psychosis",' published in <em>Psychology Today</em>, explores how studies have shown time and again that AI chatbots do not merely validate delusions but actively reinforce them, and how the companies behind these interfaces tend to prioritise user engagement over factual accuracy.</p>
<p>The piece also emphasises the need for AI psychoeducation, which would raise awareness of the side effects of confiding one's problems in chatbot models. "AI chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions," the article states, adding that being educated about AI's limitations is helpful, even while keeping its usefulness in mind.</p>
<p>In an AI-driven world, where many turn to LLMs for counsel in moments of distress and loneliness, it is important to acknowledge the side effects of sharing personal information with models that reflect validation back at the user rather than upholding factual truth or supporting emotional health.</p>