https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/
“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”
How true will this end up being? At the end of 2028 I will evaluate whether the hallucination problem for LLMs has been fixed or still exists. If hallucinations have been solved, this market resolves YES. If the hallucination problem remains unsolved, this market resolves NO.
This is a follow-up to this related market:
Update 2025-05-05 (PST) (AI summary of creator comment): The evaluation will focus on the full AI product available to users (e.g., a future version of ChatGPT like GPT-6), rather than just the raw underlying Large Language Model (LLM) in isolation. This means that systems which integrate the LLM with auxiliary components to reduce or eliminate hallucinations, such as:
- Web search
- Knowledge bases
- Citation mechanisms
will be considered when determining whether the problem is 'fixed'.
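For concreteness, here is a minimal, hypothetical sketch of the kind of integrated system the update describes: an LLM whose prompt is grounded in retrieved, citable sources rather than the raw model alone. All names here (KNOWLEDGE_BASE, retrieve, llm_generate) are illustrative assumptions, not anything from the market or a specific product.

```python
# Hypothetical sketch of an "LLM plus auxiliary components" pipeline:
# retrieved passages are injected into the prompt with citation tags,
# so the model can ground its answer instead of hallucinating.
from typing import Callable, List, Tuple

# Toy stand-in for a knowledge base or web-search index.
KNOWLEDGE_BASE = {
    "doc1": "The Eiffel Tower was completed in 1889.",
    "doc2": "Mount Everest is 8,849 metres tall.",
}

def retrieve(query: str, k: int = 2) -> List[Tuple[str, str]]:
    """Toy keyword retrieval over the knowledge base."""
    hits = [
        (doc_id, text)
        for doc_id, text in KNOWLEDGE_BASE.items()
        if any(word.lower() in text.lower() for word in query.split())
    ]
    return hits[:k]

def build_grounded_prompt(query: str) -> str:
    """Attach retrieved passages with ids so the model can cite them."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (
        "Answer using only the sources below and cite them by id; "
        "say you don't know if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def answer(query: str, llm_generate: Callable[[str], str]) -> str:
    # llm_generate is a placeholder for whatever model (e.g. a future
    # ChatGPT) is being judged; the market evaluates this whole pipeline,
    # not the bare model in isolation.
    return llm_generate(build_grounded_prompt(query))
```

Whether such a pipeline, as deployed in the actual product, eliminates hallucinations in practice is exactly what the market will assess at the end of 2028.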