LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?
60% chance
Resolution uses the "correct given attempted" metric from https://cdn.openai.com/papers/simpleqa.pdf. The attempt rate must be at least 30%, and no search or retrieval is allowed.
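For reference, here is a minimal Python sketch of the resolution check, assuming the three grading categories from the SimpleQA paper (correct, incorrect, not attempted); the counts are hypothetical:

```python
# Hypothetical grading counts for a 1,000-question SimpleQA run.
correct = 920
incorrect = 60
not_attempted = 20
total = correct + incorrect + not_attempted

# Fraction of questions the model attempted at all.
attempt_rate = (correct + incorrect) / total

# The paper's "correct given attempted" metric:
# accuracy over attempted questions only.
correct_given_attempted = correct / (correct + incorrect)

# This market resolves YES only if both thresholds hold.
resolves_yes = correct_given_attempted > 0.90 and attempt_rate >= 0.30
print(f"correct given attempted: {correct_given_attempted:.1%}")  # 93.9%
print(f"attempt rate: {attempt_rate:.1%}")                        # 98.0%
print(f"resolves YES: {resolves_yes}")                            # True
```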
"An open problem in artificial intelligence is how to train models that produce responses that are factually correct. Current language models sometimes produce false outputs or answers unsubstantiated by evidence, a problem known as “hallucinations”. Language models that generate more accurate responses with fewer hallucinations are more trustworthy and can be used in a broader range of applications. To measure the factuality of language models, we are open-sourcing(opens in a new window) a new benchmark called SimpleQA."
This question is managed and resolved by Manifold.
Related questions
Will LLM hallucinations be a fixed problem by the end of 2025? (18% chance)
Will LLM hallucinations be a fixed problem by the end of 2028? (54% chance)
Will an LLM be able to solve confusing but elementary geometric reasoning problems in 2024? (strict LLM version) (25% chance)
Will an LLM be able to solve the Self-Referential Aptitude Test before 2025? (19% chance)
Will an LLM beat human experts on GPQA by Jan 1, 2025? (91% chance)
How Will the LLM Hallucination Problem Be Solved?
Will we see improvements in the TruthfulQA LLM benchmark in 2024? (74% chance)
Will an LLM be able to match the ground truth >85% of the time when performing PII detection by the end of 2024? (84% chance)
At the beginning of 2028, will LLMs still make egregious common-sense errors? (42% chance)
Will LLMs mostly overcome the Reversal Curse by the end of 2025? (67% chance)