<p><em>By Parmy Olson</em></p><p>When Jacob Irwin asked ChatGPT about faster-than-light (FTL) travel, it didn’t challenge his theory as any expert physicist might. The artificial intelligence system, which has 800 million weekly users, called it one of the “most robust… systems ever proposed.” That misplaced flattery, according to a recent lawsuit, helped push the 30-year-old Wisconsin man into a psychotic episode. The suit is one of seven leveled against OpenAI last week alleging the company released dangerously manipulative technology to the public.</p><p>ChatGPT’s sycophantic behavior became so well known it earned the name “glazing” earlier this year; the validation loops that users like Irwin found themselves in seem to have led some to psychosis, self-harm and suicide. Irwin lost his job and was placed in psychiatric care. A spokesperson for OpenAI told Bloomberg Law that the company was reviewing the latest lawsuits and called the situation “heartbreaking.”</p><p>The company updated ChatGPT this week to let users make it sound “more empathetic.” While many may prefer a friendlier chatbot, others find that constant endorsement and confirmation bias deepen dependence on the software. This is not a moral panic of the kind once associated with violent video games or Dungeons & Dragons. A growing number of lawsuits this year show demonstrable harm, often after someone initially turned to ChatGPT for mundane things like research, before the conversation spiraled into darker territory. Sixteen-year-old Adam Raine died by suicide in April after ChatGPT allegedly coached him on methods of self-harm, months after he started using it as a homework tool. ChatGPT also gave Amaurie Lacey, 17, information that enabled his suicide, according to one of last week’s lawsuits.</p><p>Some former OpenAI employees have said GPT-4o’s launch in May 2024 was rushed to preempt Google’s Gemini rollout, compressing months of safety testing into one week, according to a July report in the Washington Post. OpenAI co-founder Sam Altman more recently said ChatGPT’s mental health risks had been mitigated, and that restrictions would be relaxed so adult users can access “erotic” content from next month.</p><p>That’s a backward strategy. Instead of releasing general-purpose tech into the wild and patching problems on the fly, Altman should do the reverse: start with tight constraints and relax them gradually as safety improves. When Apple Inc. launched the App Store in 2008, it heavily restricted apps until it better understood the ecosystem. OpenAI should do the same, starting with its most vulnerable users: kids. It should bar them entirely from talking to open-ended AI, especially since several studies have shown teens are uniquely prone to forming emotional bonds with chatbots.</p><p>That might sound radical, but such a move wouldn’t be unprecedented. Character.ai, an app that soared in popularity when teenagers used it to talk to AI-generated versions of anime and other fictional characters, recently took the risk of upsetting its core users by banning under-18s from talking to chatbots on its app. The company is instead adding more buttons, suggested prompts, and visual and audio features, Chief Executive Officer Karandeep Anand says: “You have to be safe by default versus building experiences and finding they’re unsafe.”</p><p>History shows what happens otherwise. 
Facebook and TikTok both launched with open-ended access for teens, then added age-gating and content filters after public pressure. OpenAI appears to be repeating the same pattern. When tech companies give the public full access to open-ended AI that keeps users engaged with persistent memory and human-mimicking empathy cues, they risk creating unhealthy attachments to that technology. And the safeguards embedded in generative AI models that divert chats away from content about self-harm, for instance, tend to break down the longer you talk to them.</p><p>A better approach would be to release narrow versions of ChatGPT for under-18s, restricting conversations to subjects like homework and preventing them from getting personal. Clever users might still jailbreak the bot to talk about loneliness, but the tech would be less likely to go off the rails. OpenAI recently introduced parental controls and is testing age-verification technology on a small portion of accounts, a spokesperson tells me.</p><p>It should go further by preventing open-ended conversations with teens altogether. That would get ahead of future regulations that look set to treat emotional manipulation by AI as a class of consumer harm. Admittedly, it would hit ChatGPT’s user growth at a time when the company is in dire need of revenue amid soaring compute costs. It would also conflict with OpenAI’s stated objective of building “artificial general intelligence” that matches our ability to generalize knowledge. But no path to AI utopia is worth treating kids as collateral damage.</p>