How Will the LLM Hallucination Problem Be Solved?
Other: 7%
Vector Embeddings (as with Pinecone, https://www.pinecone.io/): 12%
Ensemble Combined with Fine Tuning: 0.3%
Joint Embedding Predictive Architecture (https://arxiv.org/pdf/2301.08243.pdf): 1.1%
Feed Forward Algorithms (https://www.cs.toronto.edu/~hinton/FFA13.pdf): 0.6%
Bigger model trained on more data + RL: 23%
Vigger models + prompt engineering: 3%
It won't be: 50%
Giving all LLMs access to the internet and databases of scientific papers: 0.3%

By the year 2028, how will the Hallucination Problem have been solved for the vast majority of applications out there?
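
For context on the "Vector Embeddings" and "access to the internet and databases" options above: both amount to retrieval, i.e. grounding the model's answer in source text fetched at query time so it has something to quote rather than invent. Below is a minimal, hypothetical sketch of that idea; the embed() function, the in-memory document list, and the prompt template are illustrative stand-ins, not Pinecone's actual API.

```python
# A toy end-to-end sketch of embedding-based retrieval: embed documents,
# find the ones most similar to the question, and ground the prompt in them.
import hashlib

import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic 'embedding'; a real system would call an embedding model."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


# A tiny in-memory stand-in for a vector database such as Pinecone.
documents = [
    "The Transformer architecture was introduced in the 2017 paper 'Attention Is All You Need'.",
    "Pinecone is a managed vector database used for similarity search over embeddings.",
    "Retrieval-augmented generation supplies retrieved passages to the model as extra context.",
]
doc_vectors = np.stack([embed(d) for d in documents])


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    scores = doc_vectors @ embed(query)  # cosine similarity, since vectors are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


query = "What is Pinecone used for?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the LLM
```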


I'm guessing models will eventually be taught to express uncertainty in their answers. When asked "who is [person that does not exist]?", many of them already correctly say they don't know who that is. I see no theoretical reason this capability couldn't be broadened.
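
A minimal sketch of the abstention idea in the comment above, assuming a hypothetical ask_model() that returns an answer together with a confidence score (a real system might derive that score from token log-probabilities or agreement across repeated samples):

```python
# A toy abstention wrapper: only return the model's answer when its
# confidence clears a threshold, otherwise say "I don't know".
def ask_model(question: str) -> tuple[str, float]:
    """Hypothetical LLM call returning (answer, confidence in [0, 1])."""
    known = {"Who wrote 'Attention Is All You Need'?": ("Vaswani et al., 2017.", 0.93)}
    return known.get(question, ("John Smith, a famous 19th-century inventor.", 0.12))


def answer_with_abstention(question: str, threshold: float = 0.5) -> str:
    """Answer only when the model is confident enough; otherwise abstain."""
    answer, confidence = ask_model(question)
    if confidence < threshold:
        return "I'm not sure; I don't have reliable information about that."
    return answer


print(answer_with_abstention("Who wrote 'Attention Is All You Need'?"))
print(answer_with_abstention("Who is Zorblax Quintrell?"))  # a person who does not exist
```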

Related:

@LukeFrymire this would imply that either the options in my market are overvalued or your market is overvalued.

@PatrickDelaney I think the resolution criteria are fairly different. Mine requires that a scale-based solution be possible; yours requires it to be the primary method in production.

@LukeFrymire "larger models alone," doesn't even appear as an option on my market yet.

@VictorLevoso Ugh, I hit v instead of b on the keyboard and didn't look at the question properly before clicking submit, and now I can't edit or erase it.

@VictorLevoso you can sell and re-buy

How does this market resolve if the problem hasn't been solved by then?

@IsaacKing you could put that as an option