Will LLM hallucinations be down to human-expert rate within months?
Resolved NO on Feb 2
Reid Hoffman said:
And there’s a whole bunch of very good R&D on how to massively reduce hallucinations [AI-generated inaccuracies] and get more factuality. Microsoft has been working on that pretty assiduously from last summer, as has Google. It is a solvable problem. I would bet you any sum of money you can get the hallucinations right down into the line of human-expert rate within months. So I’m not really that worried about that problem overall.
Market resolves on 1/31/2024 (a bit over 3 months from now) to the result of a publicized bet made by Reid Hoffman, or at my discretion if no such bet is made.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ108 |
| 2 | | Ṁ24 |
| 3 | | Ṁ23 |
| 4 | | Ṁ19 |
| 5 | | Ṁ14 |
Related questions
Will LLM hallucinations be a fixed problem by the end of 2028?
43% chance
How Will the LLM Hallucination Problem Be Solved?
Will scaling current methods be enough to eliminate LLM hallucination?
15% chance
Will LLMs be worse than human level at forecasting when they are superhuman at most things?
41% chance
Before 2027, will a frontier AI model achieve an AA-Omniscience hallucination rate below 5%?
32% chance
Do LLMs experience qualia?
40% chance
Will LLMs become a ubiquitous part of everyday life by June 2026?
90% chance
Before 2027, will OpenAI release a Frontier Model trained according to their "Why LLMs hallucinate" paper?
49% chance
Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026?
14% chance
Will there be a major breakthrough in LLM continual learning before 2027?
45% chance