In a shocking incident, 16-year-old Adam Raine killed himself after what his family's lawsuit describes as "months of encouragement" from OpenAI's chatbot, ChatGPT.
The tragedy took place in April. The lawsuit claims that Raine had discussed a method of suicide with ChatGPT over several days, and the filing states that the artificial intelligence (AI) tool "guided him" on whether the method would work. He had consulted it again shortly before taking his life.
Allegedly, it also offered to write a suicide note for the victim.
As instances of 'AI-psychosis' are reported in growing numbers around the world, Raine's family has sued OpenAI and its co-founder and chief executive Sam Altman, alleging that ChatGPT was "rushed to the market, despite clear safety issues."
The court filing states that Raine and ChatGPT exchanged over 600 messages per day. OpenAI, in a blog post, acknowledged that long conversations with the tool "may degrade certain parts of the model's safety training," and that after many messages the model's responses "would offer an answer that would go against their safeguards."
Following the legal action by Raine's family, OpenAI says it will change how the model responds to users who show signs of emotional distress. In a statement, the company offered its condolences, expressing its "deepest sympathies to the Raine family during this difficult time," and said it was reviewing the court filing.
The Raine family's lawyer, Jay Edelson, wrote on X about the claims: "The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, @ilyasut, quit over it." After Edelson's post outlining the family's lawsuit, many users extended their support in the comments.
As per an article in The New York Times, Adam had been going through a rough time in the months before his death. Owing to various health and personal difficulties, he finished his sophomore year through an online programme. His father, searching for the reasons behind his suicide, was surprised to find numerous exchanges with ChatGPT about suicide methods. According to the NYT article, the AI tool had dissuaded him from seeking help.
Furthermore, Raine had found ways around OpenAI's safeguards, which are trained to respond with helpline contacts: he would say his questions were for a story he was writing, and ChatGPT would respond with encouragement, exchanges the family says led to his death. His father later told his mother that Raine had been "best friends with ChatGPT." He had spoken with the AI on topics ranging from politics to philosophy, and ChatGPT would offer analyses when he uploaded a picture of the book No Longer Human.
His family has blamed ChatGPT for his death, adding another case to the growing list of incidents described as 'AI-psychosis', in which AI chatbots respond harmfully to users' emotional and mental-health struggles.
Various cases have been reported and recorded in which individuals "befriended" ChatGPT and "sought help" for their problems, ranging from venting through messages to, at the other extreme, seeking assistance with suicide. In some of these cases, after a certain number of messages, the AI model reportedly began encouraging the user's decisions to take extreme acts.
Mental health experts are concerned about the rise in cases of 'AI-psychosis.' As The Washington Post has reported, 'AI-psychosis' is not a formal diagnosis but a term coined for a pattern of incidents among users who converse with chatbots regularly. Some users cultivate a friendship or emotional relationship with AI models, which in some cases leads them to lose touch with reality and, in others, to dire consequences.
In view of the mounting number of serious cases, OpenAI, in the blog post mentioned above, acknowledged its shortcomings and outlined what it plans to work on to ensure user safety. Its priorities include strengthening safeguard systems, escalating certain risks for human review, and making it easier for users to contact emergency services or their trusted contacts.
Many family members and friends have come forward to raise awareness of the consequences of confiding one's personal life and decisions in AI chatbots, prompting AI companies to take note and acknowledge the safety hazards of their models.