How Will the LLM Hallucination Problem Be Solved?
14%  Vector Embeddings (as with Pinecone https://www.pinecone.io/)
0.4%  Ensemble Combined with Fine Tuning
1.3%  Joint Embedding Predictive Architecture (https://arxiv.org/pdf/2301.08243.pdf)
0.8%  Feed Forward Algorithms (https://www.cs.toronto.edu/~hinton/FFA13.pdf)
33%  Bigger model trained on more data + RL
3%  Vigger models + prompt engineering
37%  It won't be
0.4%  Giving all LLMs access to the internet and databases of scientific papers

By the year 2028, how will the Hallucination Problem have been solved for the vast majority of applications out there?
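
For context, the "Vector Embeddings" option generally refers to retrieval-augmented generation: embed a knowledge base, retrieve the passages closest to a query, and ask the model to answer only from what was retrieved. The sketch below is a minimal illustration of that idea under stated assumptions: it uses an in-memory list instead of a real vector database such as Pinecone, and embed() is a placeholder standing in for an embedding model, not any library's actual API.

```python
# Minimal sketch of retrieval-grounded answering (the idea behind the
# "Vector Embeddings" option). embed() is a placeholder: in practice it
# would call an embedding model, and the index would live in a vector
# database such as Pinecone rather than a Python list.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash-seeded pseudo-random vector so the
    # example runs without an external model. Not semantically meaningful.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

# A tiny "knowledge base" standing in for indexed documents.
documents = [
    "LLMs can hallucinate facts that are not in their training data.",
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Pinecone is a managed vector database for similarity search.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity search over the in-memory index
    # (all vectors are unit-normalized, so dot product = cosine).
    q = embed(query)
    scored = sorted(index, key=lambda item: float(item[1] @ q), reverse=True)
    return [doc for doc, _ in scored[:k]]

def build_grounded_prompt(query: str) -> str:
    # Retrieved passages are injected into the prompt so the model is
    # asked to answer only from supplied context, reducing hallucination.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(build_grounded_prompt("How does retrieval reduce hallucination?"))
```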

@LukeFrymire this would imply that either the options in my market are overvalued or your market is overvalued.

bought Ṁ10 of Bigger model trained...

@PatrickDelaney I think the resolution criteria are fairly different. Mine requires that a scale-based solution be possible; yours requires it to be the primary method in production.

@LukeFrymire "larger models alone," doesn't even appear as an option on my market yet.

answered "Vigger models + prompt engineering" (bought Ṁ10)

@VictorLevoso Ugh, hit v instead of b on the keyboard and didn't look at the question properly before clicking submit, and now I can't edit it or erase it.

@VictorLevoso you can sell and re-buy

How does this market resolve if the problem hasn't been solved?

@IsaacKing you could put that as an option
