ChatGPT logo. Credit: Reuters File Photo
OpenAI's ChatGPT fabricated a story claiming that a Norwegian man had murdered his children, a privacy campaign group has alleged.
Arve Hjalmar Holmen typed his name into ChatGPT out of curiosity to see what it would say. To his horror, the chatbot falsely claimed he had killed his sons and been sentenced to 21 years in prison.
Notably, ChatGPT's response included accurate details about Holmen's life, such as the number and gender of his children and the name of his hometown, which itself raises privacy concerns.
"OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it," Vienna-based Noyb (None of Your Business) said in a press release.
According to a report by TechCrunch, Noyb conducted its own investigation to determine whether another man with a similar name had been accused of such a crime, but found no such case; it remains unclear what prompted ChatGPT's response.
Noyb has filed a complaint with Datatilsynet, the Norwegian Data Protection Authority, on the grounds that ChatGPT violated the European Union's General Data Protection Regulation (GDPR), even though ChatGPT's underlying AI model has since been updated and no longer repeats the defamatory claims.
Under Article 5(1)(d) of the GDPR, companies processing personal data must ensure that it is accurate; if it is not, it must be corrected or deleted.
Noyb states that while the chatbot no longer displays the false information, the data has not been deleted.
Noyb wrote, “The incorrect data may still remain part of the LLM’s dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means there is no way for the individual to be absolutely sure that this output can be completely erased [...] unless the entire AI model is retrained.”
In its complaint to Datatilsynet, Noyb asks the agency to order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results, and to impose a fine on the company.