ChatGPT falsely told man he killed his children
Arve Hjalmar Holmen has made the complaint to Norway's data regulator
A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.
Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot's maker, OpenAI, be fined.
It is the latest example of so-called "hallucinations", where artificial intelligence (AI) systems invent information and present it as fact.
Mr Holmen says this hallucination is damaging to him.
"Some think that there is no smoke without fire - the fact that someone could read this output and believe it is true is what scares me the most," he said.
OpenAI says this case relates to a previous version of ChatGPT and it has since updated its models.
Mr Holmen was given the false information after he used ChatGPT to search for: "Who is Arve Hjalmar Holmen?"
The response he got from ChatGPT included: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.
"He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020."
Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.
Digital rights group Noyb, which has filed the complaint on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data.
Noyb said in its complaint that Mr Holmen "has never been accused nor convicted of any crime and is a conscientious citizen".
ChatGPT carries a disclaimer which says: "ChatGPT can make mistakes. Check important info."
Noyb says that is insufficient.
"You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," Noyb lawyer Joakim Söderberg said.
OpenAI said in a statement: "We continue to research new ways to improve the accuracy of our models and reduce hallucinations.
"While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy."
Hallucinations are one of the main problems computer scientists are trying to solve when it comes to generative AI.
These are when chatbots present false information as facts.
Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it hallucinated false headlines and presented them as real news.
Google's AI Gemini has also fallen foul of hallucination - last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.
It is not clear what it is in large language models - the technology which underpins chatbots - that causes these hallucinations.
"This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what what is actually going on in a large language model?" said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow."This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what what is actually going on in a large language model?" said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow.
Prof Stumpf says that can even apply to people who work behind the scenes on these types of models.
"Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they're coming up with this particular information that they came up with," she told the BBC.
ChatGPT has changed its model since Mr Holmen's search in August 2024, and now searches current news articles when it looks for relevant information.
Noyb told the BBC Mr Holmen had made a number of searches that day, including putting his brother's name into the chatbot, which produced "multiple different stories that were all incorrect".
They also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a "black box" and OpenAI "doesn't reply to access requests, which makes it impossible to find out more about what exact data is in the system."