
How Will the LLM Hallucination Problem Be Solved?
Vector Embeddings (as with Pinecone, https://www.pinecone.io/): 8%
Filtering (as with DeepMind AlphaCode, https://www.deepmind.com/blog/competitive-programming-with-alphacode): 2%
Ensemble Combined with Fine-Tuning: 0.1%
Joint Embedding Predictive Architecture (https://arxiv.org/pdf/2301.08243.pdf): 0.5%
Feed Forward Algorithms (https://www.cs.toronto.edu/~hinton/FFA13.pdf): 1%
Bigger model trained on more data + RL: 9%
Vigger models + prompt engineering: 1.4%
It won't be: 48%
Giving all LLMs access to the internet and databases of scientific papers: 20%
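The "Vector Embeddings" option refers to retrieval-augmented grounding: embed a corpus, retrieve the passages most similar to the user's question, and splice them into the prompt so the model answers from retrieved text rather than guessing from its weights. A minimal sketch of that retrieval step, using a toy bag-of-words vector in place of a learned embedding model (the `embed`, `cosine`, and `retrieve` names are illustrative, not any library's API; a production system would use a real embedding model and a vector database such as Pinecone):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector standing in for a learned embedding;
    # a real pipeline would call an embedding model here.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    # Return the k passages most similar to the query; these would be
    # pasted into the LLM prompt as grounding context.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

corpus = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
    "Transformers use attention over token embeddings.",
]
grounding = retrieve("Where is the Eiffel Tower?", corpus, k=1)
```

Whether this actually eliminates hallucination (rather than just reducing it) is exactly what the market is asking: the model can still misstate or ignore the retrieved passages.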
By the year 2028, how will the Hallucination Problem have been solved for the vast majority of applications out there?
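The "Filtering" option refers to AlphaCode-style post-hoc selection: sample many candidate outputs, then keep only those that pass executable checks (in AlphaCode, the problem's example tests run against each generated program). A minimal sketch under that assumption; `filter_candidates` and the toy arithmetic check are illustrative, not AlphaCode's actual code:

```python
def filter_candidates(candidates, checks):
    # Keep only sampled candidates that pass every executable check;
    # failing (hallucinated) outputs are discarded before the user sees them.
    return [c for c in candidates if all(check(c) for check in checks)]

# Toy usage: "generated" answers filtered by a property we can verify by execution.
candidates = ["2 + 2 = 4", "2 + 2 = 5", "2 + 2 = 22"]
checks = [lambda s: eval(s.replace("=", "=="))]
kept = filter_candidates(candidates, checks)
```

The approach only helps where an output's correctness is mechanically checkable, which is why it polls low here as a general-purpose fix.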
This question is managed and resolved by Manifold.
People are also trading
Will LLM hallucinations be a fixed problem by the end of 2028? (43% chance)
Will scaling current methods be enough to eliminate LLM hallucination? (15% chance)
Do LLMs experience qualia? (40% chance)
By which date will the state-of-the-art LLM use latent space to reason?
Will LLMs such as GPT4 be considered a solution to Moravec’s paradox by 2030? (20% chance)
Will an LLM be able to solve the Self-Referential Aptitude Test before 2027? (79% chance)
When will the next paradigm in LLMs (after reasoning) be released?
Will any author of "Clever Hans or Neural Theory of Mind?" say that LLMs have some robust Theory-of-Mind (ToM) in 2026? (44% chance)
Will someone train an LLM using a dataset that has had all references to consciousness removed? (25% chance)
@LukeFrymire This would imply that either the options in my market are overvalued or your market is overvalued.
@PatrickDelaney I think the resolution criteria are fairly different. Mine requires that a scale-based solution be possible; yours requires it to be the primary method in production.
@VictorLevoso Ugh, hit v instead of b on the keyboard and didn't look at the question properly before clicking submit, and now I can't edit or erase it.