How Will the LLM Hallucination Problem Be Solved?
Other: 4%
Vector Embeddings (as with Pinecone https://www.pinecone.io/): 9%
Ensemble Combined with Fine Tuning: 0.1%
Joint Embedding Predictive Architecture (https://arxiv.org/pdf/2301.08243.pdf): 0.6%
Feed Forward Algorithms (https://www.cs.toronto.edu/~hinton/FFA13.pdf): 0.3%
Bigger model trained on more data + RL: 19%
Vigger models + prompt engineering: 1.6%
It won't be: 43%
Giving all LLMs access to the internet and databases of scientific papers: 22%

By the year 2028, how will the Hallucination Problem have been solved for the vast majority of applications?

I'm guessing models will eventually be taught to express uncertainty in their answers. When asked "who is [person that does not exist]", many of them already correctly say they don't know who that is. I see no theoretical reason this capability couldn't be broadened.
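One cheap way to approximate that behaviour without retraining is to sample the model several times and abstain when the samples disagree. Below is a minimal sketch of that idea; `sample_answer` is a hypothetical stand-in for a single nondeterministic LLM query, not any real API.

```python
import collections
from typing import Callable, List


def answer_or_abstain(
    question: str,
    sample_answer: Callable[[str], str],  # hypothetical stand-in for one LLM call
    n_samples: int = 5,
    agreement_threshold: float = 0.6,
) -> str:
    """Sample several answers and abstain when they disagree.

    If repeated samples of the same question do not converge on one
    answer, return "I don't know." instead of guessing. This is only a
    sketch of a self-consistency-style uncertainty check, not a claim
    about how any particular model expresses uncertainty.
    """
    answers: List[str] = [sample_answer(question) for _ in range(n_samples)]
    most_common, count = collections.Counter(answers).most_common(1)[0]
    if count / n_samples >= agreement_threshold:
        return most_common
    return "I don't know."


if __name__ == "__main__":
    import random

    # Toy stand-in: a "model" that guesses randomly about an unknown person,
    # so its samples disagree and the wrapper abstains.
    fake_llm = lambda q: random.choice(["a painter", "a physicist", "a composer"])
    print(answer_or_abstain("Who is Zyx Quorble?", fake_llm))
```

Whether this counts as "solving" hallucination or just refusing to answer is exactly what the market options disagree about.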

Related:

@LukeFrymire this would imply that either the options in my market are overvalued or your market is overvalued.

@PatrickDelaney I think the resolution criteria are fairly different. Mine requires that a scale-based solution is possible; yours requires it to be the primary method in production.

@LukeFrymire "larger models alone," doesn't even appear as an option on my market yet.

@VictorLevoso Ugh, hit v instead of b on the keyboard and didn't look at the question properly before clicking submit, and now can't edit it or erase it.

@VictorLevoso you can sell and re-buy

How does this market resolve if the hallucination problem hasn't been solved by then?

@IsaacKing you could put that as an option
