Will LLM hallucinations be down to human-expert rate within months?
34 traders · Ṁ7,311 · Resolved NO on Feb 2
Reid Hoffman said:
And there’s a whole bunch of very good R&D on how to massively reduce hallucinations [AI-generated inaccuracies] and get more factuality. Microsoft has been working on that pretty assiduously from last summer, as has Google. It is a solvable problem. I would bet you any sum of money you can get the hallucinations right down into the line of human-expert rate within months. So I’m not really that worried about that problem overall.
Market resolves on 1/31/2024 (a bit over three months from now) to the result of a publicized bet made by Reid Hoffman, or at my discretion if no such bet is made.
This question is managed and resolved by Manifold.
Related questions
Will LLM hallucinations be a fixed problem by the end of 2025? · 18% chance
Will LLM hallucinations be a fixed problem by the end of 2028? · 54% chance
LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026? · 60% chance
Will LLM hallucinations be "largely eliminated" by 2025? · 10% chance
Will hallucinations (made up facts) created by LLMs go below 1% on specific corpora before 2025? · 41% chance
How Will the LLM Hallucination Problem Be Solved?
A year from now, will Manifold think this prediction about the future of LLM's has held up? · 68% chance
Will scaling current methods be enough to eliminate LLM hallucination? · 15% chance
At the beginning of 2028, will LLMs still make egregious common-sensical errors? · 43% chance
Will an LLM be able to match the ground truth >85% of the time when performing PII detection by 2024 end? · 84% chance