
LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?
60% chance
Resolution uses the "correct given attempted" metric defined in the SimpleQA paper (https://cdn.openai.com/papers/simpleqa.pdf). The model's attempt rate must be at least 30%, and no search/retrieval is allowed.
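Below is a minimal sketch of how the "correct given attempted" metric works, assuming each graded answer is labeled "correct", "incorrect", or "not_attempted" (the labels and function here are illustrative, not the official SimpleQA harness): correct answers divided by attempted answers, so abstentions do not count against the model but also do not count as attempts.

```python
from collections import Counter

def correct_given_attempted(grades: list[str]) -> tuple[float, float]:
    """Return (correct-given-attempted score, attempt rate).

    "Correct given attempted" divides correct answers by attempted
    answers (correct + incorrect), excluding abstentions; the attempt
    rate is the fraction of questions the model answered at all.
    """
    counts = Counter(grades)
    attempted = counts["correct"] + counts["incorrect"]
    if attempted == 0:
        return 0.0, 0.0
    return counts["correct"] / attempted, attempted / len(grades)

# Hypothetical run: 55 correct, 25 incorrect, 20 abstentions out of 100.
grades = ["correct"] * 55 + ["incorrect"] * 25 + ["not_attempted"] * 20
score, rate = correct_given_attempted(grades)
print(f"correct given attempted: {score:.1%}, attempt rate: {rate:.1%}")
# -> correct given attempted: 68.8%, attempt rate: 80.0%
# This market requires score > 90% with an attempt rate of at least 30%.
```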
"An open problem in artificial intelligence is how to train models that produce responses that are factually correct. Current language models sometimes produce false outputs or answers unsubstantiated by evidence, a problem known as “hallucinations”. Language models that generate more accurate responses with fewer hallucinations are more trustworthy and can be used in a broader range of applications. To measure the factuality of language models, we are open-sourcing(opens in a new window) a new benchmark called SimpleQA."
This question is managed and resolved by Manifold.
Related questions
Will RL work for LLMs "spill over" to the rest of RL by 2026? (40% chance)
Will LLM hallucinations be a fixed problem by the end of 2025? (14% chance)
Will an LLM report >50% score on ARC in 2025? (85% chance)
Will LLM hallucinations be "largely eliminated" by 2025? (10% chance)
Will hallucinations (made up facts) created by LLMs go below 1% on specific corpora before 2025? (41% chance)
Will LLM hallucinations be a fixed problem by the end of 2028? (55% chance)
How Will the LLM Hallucination Problem Be Solved?
6 months from now will I judge that LLMs had already peaked by Nov 2024? (16% chance)
Will there be an LLM which scores above what a human can do in 2 hours on METR's eval suite before 2026? (94% chance)
Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026? (58% chance)