LLM Hallucination: Will an LLM score >90% on SimpleQA before 2026?


55% chance


Resolution uses the "correct given attempted" metric defined in https://cdn.openai.com/papers/simpleqa.pdf. The model's attempt rate must be at least 30%, and no search or retrieval is allowed.
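As a minimal sketch of what these two criteria measure, the snippet below computes attempt rate and correct-given-attempted from per-question grades, assuming each answer is graded into the three categories the SimpleQA paper uses (correct, incorrect, not attempted); the function name and grade labels here are illustrative, not from the benchmark's code.

```python
from collections import Counter

def simpleqa_scores(grades: list[str]) -> tuple[float, float]:
    """Compute (attempt rate, correct given attempted) from per-question grades.

    Each grade is one of "correct", "incorrect", or "not_attempted",
    mirroring the three grading categories in the SimpleQA paper.
    """
    counts = Counter(grades)
    attempted = counts["correct"] + counts["incorrect"]
    attempt_rate = attempted / len(grades)
    correct_given_attempted = counts["correct"] / attempted if attempted else 0.0
    return attempt_rate, correct_given_attempted

# Example: 6 of 8 questions attempted, 5 of those answered correctly.
grades = ["correct"] * 5 + ["incorrect"] + ["not_attempted"] * 2
attempt_rate, cga = simpleqa_scores(grades)
print(f"attempt rate: {attempt_rate:.0%}")            # 75%
print(f"correct given attempted: {cga:.1%}")          # 83.3%
# This market asks whether correct-given-attempted exceeds 90%
# while the attempt rate stays at or above 30%.
```

Note that the 30% attempt-rate floor matters: a model that abstains on nearly every question could otherwise score near 100% on correct-given-attempted by answering only the questions it is most sure of.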

"An open problem in artificial intelligence is how to train models that produce responses that are factually correct. Current language models sometimes produce false outputs or answers unsubstantiated by evidence, a problem known as “hallucinations”. Language models that generate more accurate responses with fewer hallucinations are more trustworthy and can be used in a broader range of applications. To measure the factuality of language models, we are open-sourcing(opens in a new window) a new benchmark called SimpleQA."

This question is managed and resolved by Manifold.


## Related questions

- Will LLM hallucinations be a fixed problem by the end of 2025? (22% chance)
- Will LLM hallucinations be a fixed problem by the end of 2028? (51% chance)
- At the beginning of 2028, will LLMs still make egregious common-sensical errors? (44% chance)
- How Will the LLM Hallucination Problem Be Solved?
- Will LLMs mostly overcome the Reversal Curse by the end of 2025? (66% chance)
- Will an LLM be able to solve the Self-Referential Aptitude Test before 2027? (66% chance)
- Will an LLM report >50% score on ARC in 2025? (82% chance)
- Will an LLM be able to solve the Self-Referential Aptitude Test before 2025? (20% chance)
- Will hallucinations (made up facts) created by LLMs go below 1% on specific corpora before 2025? (41% chance)
- Will an LLM beat human experts on GPQA by Jan 1, 2025? (90% chance)