Will LLM hallucination problems on "summarize/use this context to answer a question" be solved by April 2024?
May 1 · 6% chance

Specifically, there is a common pattern today: take an LLM and some search system. When the user asks a question, have the LLM feed it into search, get results, and then use those results to answer the original question. Examples include GPT Bing, Stripe Docs AI, etc. These systems still sometimes hallucinate, i.e. come up with facts not supported by the provided context/text.
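For reference, here is a minimal sketch of that retrieve-then-answer pattern, assuming the OpenAI Python client and a hypothetical `search()` retriever (the model name, prompt wording, and function names are illustrative, not taken from any of the products above):

```python
# Hedged sketch of the retrieve-then-answer pattern described above.
# `search` is a hypothetical retriever; any search backend would do.
from openai import OpenAI

client = OpenAI()

def search(query: str, k: int = 3) -> list[str]:
    """Hypothetical: return the top-k document snippets for the query."""
    raise NotImplementedError

def answer_with_context(question: str) -> str:
    snippets = search(question)
    context = "\n\n".join(snippets)
    # Instruct the model to stay grounded in the retrieved text.
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Even with the "use only the context" instruction, systems built this way sometimes produce unsupported claims, which is the failure mode this market is about.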

I'm predicting, according to my best judgement, whether this will be solved by April 2024. "Solved" here means that in roughly 99.9% of cases this issue doesn't occur.

predicts YES

Additional detail: this does not include actively *trying* to make a system produce bad answers. The test is: "if I give a document and a question, will it answer based on the document (or at least refuse to answer if it can't)?"

Can't cheat by refusing to answer all questions. The system should be "reasonably useful" in terms of how often it refuses to answer (sometimes the doc just doesn't contain an answer).
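One possible way to operationalize that test (just a sketch, not the market's actual resolution procedure): run a set of (document, question) pairs through the system and count how often the answer is neither supported by the document nor an explicit refusal. The `answer_fn` and `is_supported` judge here are hypothetical placeholders.

```python
from typing import Callable

def is_refusal(answer: str) -> bool:
    # Crude check for an explicit "can't answer from this document" response.
    refusal_phrases = ("i don't know", "not in the document", "cannot answer")
    return any(p in answer.lower() for p in refusal_phrases)

def is_supported(answer: str, document: str) -> bool:
    """Hypothetical judge: does the document actually support the answer?
    In practice this needs a human rater or a separate LLM check."""
    raise NotImplementedError

def hallucination_rate(
    cases: list[tuple[str, str]],
    answer_fn: Callable[[str, str], str],
) -> float:
    """cases: (document, question) pairs; answer_fn(document, question) -> answer.
    Returns the fraction of answers that are neither refusals nor supported."""
    bad = 0
    for document, question in cases:
        answer = answer_fn(document, question)
        if not (is_refusal(answer) or is_supported(answer, document)):
            bad += 1
    return bad / len(cases)

# "Solved" per the market: hallucination_rate(...) should be ~0.1% or lower,
# while the refusal rate stays low enough that the system remains useful.
```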