This article is from the source 'rtcom'.

You can find the current article at its original source at https://www.rt.com/news/624307-open-ai-explains-chatbot-hallucinations/

The article has changed 2 times. There is an RSS feed of changes available.

Version 1 (about 8 hours after Version 0)
OpenAI explains reasons for chatbot ‘hallucinations’
Language models have been conditioned to make wild guesses instead of admitting ignorance, a study has found
The company behind ChatGPT has addressed the persistent problem of artificial intelligence models generating plausible but false statements, which it calls “hallucinations.”
In a statement on Friday, OpenAI explained that models are typically encouraged to make a guess, however improbable, rather than acknowledging that they cannot answer a question.
The issue is attributable to the core principles underlying “standard training and evaluation procedures,” it added.
OpenAI revealed that instances in which language models “confidently generate an answer that isn’t true” continue to plague newer, more advanced iterations, including its latest flagship GPT-5 system.
According to a recent study, the problem is rooted in the way the performance of language models is usually evaluated, with a model that guesses ranked higher than a careful one that admits uncertainty. Under the standard protocols, AI systems learn that failing to generate an answer is a surefire way to get zero points on a test, while an unsubstantiated guess may turn out to be correct.
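The incentive the study describes comes down to simple expected-score arithmetic. The sketch below is a hypothetical illustration, not code from OpenAI or the study, and the 20% chance of a lucky guess is an assumed figure: under accuracy-only grading, a model that always guesses outscores one that admits uncertainty, while a scheme that penalizes confident wrong answers reverses the incentive.

def expected_score(p_correct, reward_correct, penalty_wrong, abstain_score, guesses):
    # Expected per-question score: either guess (right with probability p_correct)
    # or abstain and take the fixed abstain_score.
    if guesses:
        return p_correct * reward_correct + (1 - p_correct) * penalty_wrong
    return abstain_score

p = 0.2  # assumed chance that an unsubstantiated guess happens to be right

# Accuracy-only grading: correct = 1, wrong = 0, "I don't know" = 0.
print(expected_score(p, 1, 0, 0, guesses=True))   # 0.2 -> guessing is rewarded
print(expected_score(p, 1, 0, 0, guesses=False))  # 0.0

# Grading that penalizes confident errors: correct = 1, wrong = -1, "I don't know" = 0.
print(expected_score(p, 1, -1, 0, guesses=True))   # -0.6 -> guessing now loses
print(expected_score(p, 1, -1, 0, guesses=False))  # 0.0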
“Fixing scoreboards can broaden adoption of hallucination-reduction techniques,” the statement concluded, acknowledging, however, that “accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.”