Source: https://www.rt.com/news/624307-open-ai-explains-chatbot-hallucinations/
OpenAI explains reasons for chatbot ‘hallucinations’
Language models have been conditioned to make wild guesses instead of admitting ignorance, a study has found
The company behind ChatGPT has addressed the persistent problem of artificial intelligence models generating plausible but false statements, which it calls “hallucinations.”
In a statement on Friday, OpenAI explained that models are typically encouraged to make a guess, however improbable, rather than acknowledge that they cannot answer a question.
The issue is attributable to the core principles underlying “standard training and evaluation procedures,” it added.
OpenAI revealed that instances in which language models “confidently generate an answer that isn’t true” continue to plague newer, more advanced iterations, including its latest flagship GPT-5 system.
According to a recent study, the problem is rooted in the way the performance of language models is usually evaluated, with a model that guesses ranked above a careful one that admits uncertainty. Under standard protocols, AI systems learn that declining to answer guarantees zero points on a test, while an unsubstantiated guess may turn out to be correct.
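To make that scoring incentive concrete, here is a minimal sketch in Python. It is not from OpenAI's statement: the grading rules, the probability of a lucky guess and the penalty value are illustrative assumptions, chosen only to show why accuracy-only scoring rewards guessing while a penalty for wrong answers can make admitting uncertainty the better policy.

# Illustrative sketch: expected test score for "always guess" vs. "abstain when unsure".
# The probability of a lucky guess (p_correct) and the wrong-answer penalty are
# assumed numbers, not figures from OpenAI's statement or the cited study.

def expected_score(p_correct: float, wrong_penalty: float) -> dict:
    """Expected points per question under a given grading rule.

    Guessing earns 1 point with probability p_correct and -wrong_penalty otherwise;
    abstaining ("I don't know") always earns 0 points.
    """
    guess = p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)
    abstain = 0.0
    return {"guess": guess, "abstain": abstain}

# Accuracy-only grading (no penalty): any nonzero chance of a lucky guess beats
# abstaining, which is the incentive the article describes.
print(expected_score(p_correct=0.2, wrong_penalty=0.0))   # {'guess': 0.2, 'abstain': 0.0}

# Grading that penalizes confident wrong answers: abstaining now scores higher.
print(expected_score(p_correct=0.2, wrong_penalty=0.5))   # {'guess': -0.2, 'abstain': 0.0}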
“Fixing scoreboards can broaden adoption of hallucination-reduction techniques,” the statement concluded, acknowledging, however, that “accuracy will never reach 100% because, regardless of model size, search and reasoning capabilities, some real-world questions are inherently unanswerable.”