ChatGPT accused of saying an innocent man murdered his children

A Norwegian man, Arve Hjalmar Holmen, has filed a privacy complaint against OpenAI, alleging that ChatGPT falsely described him as a convicted murderer serving time in a Norwegian prison. The chatbot mixed real personal details, such as his hometown and information about his children, with the fabricated claims. The Austrian advocacy group Noyb has taken up Holmen's cause, filing the complaint with Datatilsynet, Norway's data protection authority. The complaint accuses OpenAI of violating the European Union's General Data Protection Regulation (GDPR) and demands removal of the defamatory output as well as changes to the model to prevent similar inaccuracies.
This incident underscores the broader implications of inaccurate AI-generated information and the responsibilities of companies like OpenAI under data protection laws such as the GDPR. The case also highlights ongoing concerns about AI's potential to disseminate false information despite disclaimers about possible inaccuracies. Noyb's action reflects increasing scrutiny of AI technologies and the demand for mechanisms that address errors swiftly, emphasizing the importance of transparency and accountability in AI systems.
RATING
The article effectively addresses a significant issue concerning AI, privacy, and misinformation. It is well-researched and timely, providing a clear account of the complaint against OpenAI and the implications under the GDPR. While the narrative is compelling, the absence of a detailed response from OpenAI is a notable omission. The story is accessible and engaging, with the potential to influence public discourse on AI ethics and regulation. Overall, it is a well-rounded piece that highlights the complexities of balancing technological innovation with privacy rights.
RATING DETAILS
The story presents a factual account of a privacy complaint filed against OpenAI by a Norwegian individual, supported by an advocacy group. The main claims are consistent with the reported facts, such as the false description by ChatGPT and the GDPR violation allegations. However, some details, like the exact date of the initial query, are missing, which affects the story's precision. The story accurately depicts the GDPR requirements and Noyb's demands for rectification, aligning well with verified information.
The story primarily focuses on the perspective of the complainant and the advocacy group Noyb, which is understandable given the nature of the complaint. However, it lacks a detailed counter-perspective from OpenAI, which would have provided a more balanced view. The inclusion of OpenAI's disclaimer about potential inaccuracies is a step toward balance, but further insight into the company's response would strengthen this dimension.
The article is well-structured and uses clear, concise language to convey the main points. The logical flow from the complaint to the GDPR implications and Noyb's demands is easy to follow. The use of quotes from Noyb's lawyer adds clarity to the legal aspects of the story. Overall, the article is accessible and understandable to a general audience.
The article relies on credible sources, including statements from Noyb and references to GDPR regulations, which are authoritative on the subject. The involvement of a known advocacy group adds to the credibility. However, direct quotes or statements from OpenAI would improve the source quality by providing a more comprehensive view of the issue.
The story provides a clear account of the complaint and the legal framework involved, but it lacks transparency regarding the methodology used to verify the claims. Details about the initial query's timing are redacted, which limits transparency. More information on how the allegations were verified would enhance the story's transparency.
Sources
- https://boingboing.net/2025/03/20/chatgpt-faces-privacy-violation-for-inventing-a-horrific-murder-case-about-an-innocent-man.html
- https://www.ndtv.com/world-news/case-against-chatgpt-over-false-horror-story-about-norwegian-man-7971011
- https://www.engadget.com/ai/chatgpt-reportedly-accused-innocent-man-of-murdering-his-children-120057654.html
- https://www.theregister.com/2025/03/20/chatgpt_accuses_man_of_murdering/
- https://noyb.eu/en/ai-hallucinations-chatgpt-created-fake-child-murderer