Will LLM hallucinations be a fixed problem by the end of 2025?


“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

How true will this end up being? At the end of 2025 I will evaluate whether the hallucination problem for LLMs has been fixed or still exists. If hallucinations have been solved, this market resolves YES. If the hallucination problem remains outstanding, this market resolves NO.

Related market:


You can almost always find hallucinations if you're looking for them, even in bleeding-edge LLMs. One pretty reliable method is to take a well-known trick question and modify it to remove the trick. Even the top LLMs will often still respond as if you'd asked the original trick question rather than what you actually asked. Here's an example from GPT-4o:

Memory is flawed. Humans hallucinate all the time, just like GPT-4.

predicts YES

I'm surprised this market isn't higher; there has been notable progress on the accuracy of LLM-generated content over the last year or so, and this seems likely to continue.

It's plausible that hallucinations are an inherent artifact of such stochastic generative systems, but "solving" hallucinations appears reducible to a problem of detecting when they occur and then supplementing with external sources or otherwise signaling the uncertainty. Perhaps I'm unclear on the resolution criteria, but as I understand them, the 98% accuracy with RAG described by OpenAI would have been sufficient to resolve YES (if it had actually been achieved).

There also seems to be a tradeoff between an LLM being able to provide correct answers and not providing incorrect ones, since you can always raise the bar of certainty required before a fact is stated. I think a system that gives the right answer 80% of the time, is wrong 0.1% of the time, and otherwise explains that it does not know is more useful than one that answers correctly 99% of the time and is wrong 1% of the time. If such a ratio as 80%/19.9%/0.1% is achievable by fine-tuning current systems to hedge more frequently, would that pass for hallucinations being solved? If the rate of falsehoods is the primary factor, then I feel it should, as it would exceed human-level accuracy.
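The tradeoff I'm describing can be shown with a toy simulation (everything here is a made-up stand-in, not a real model): assume a perfectly calibrated confidence score and let the system abstain below a threshold. Raising the threshold drives the wrong-answer rate toward zero at the cost of answering fewer questions.

```python
import random

random.seed(0)

def simulate(threshold, n=100_000):
    """Toy model: each question gets a confidence score, and the answer is
    correct with probability equal to that score (perfect calibration,
    a strong simplifying assumption). The model abstains ("I don't know")
    whenever confidence < threshold."""
    correct = wrong = abstain = 0
    for _ in range(n):
        conf = random.random()          # hypothetical calibrated confidence
        if conf < threshold:
            abstain += 1
        elif random.random() < conf:    # right with probability = conf
            correct += 1
        else:
            wrong += 1
    return correct / n, wrong / n, abstain / n

for t in (0.0, 0.5, 0.9, 0.99):
    c, w, a = simulate(t)
    print(f"threshold={t:.2f}  correct={c:.3f}  wrong={w:.3f}  abstain={a:.3f}")
```

The point of the sketch is just that "fewer hallucinations" is always purchasable with more abstention, which is why the resolution criteria matter so much.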

It's also worth considering that hallucinations are one of the primary (if not THE primary?) roadblocks preventing more widespread use of current LLMs. It seems likely that huge resources will be put into improving their factual accuracy over the next two years.

Is there something in particular that I'm missing?

bought Ṁ100 NO from 33% to 31%
bought Ṁ100 of NO

@CollinGray You are describing a machine that can tell whether an arbitrary statement is true or not. We can't do that as humans, so why do you think it will be easy to do with an LLM?

Otherwise I agree with you: I'd rather have something that's only wrong 0.1% of the time, even if I had to sacrifice some answers that were correct but not confident. I disagree with your numbers, though. Answering only 80% of questions is not good enough; that bar would leave the AI able to answer only trivially true things.

predicts YES

@Odoacre You're right that answering only 80% of questions is too low to be very useful, but my larger point was that "solving hallucinations" is a very fuzzy target: you can always reduce false answers at the cost of correct ones by rewarding hedging, so I'm unclear what a system that fulfills this market would look like.

As for your first point, I agree that determining whether an arbitrary statement is true or not is very difficult, but LLMs have more information to work with, i.e. a probability distribution over all possible answers. The assumption here is that, unlike hallucinated facts, memorized facts are far more self-consistent. For example, "Washington" will dominate the next-token probability distribution(s) for "The 1st president was", but "The 200th president was" will yield a distribution with multiple top possibilities. For a better explanation than I can give, check out this paper

This depends largely on how you define an "LLM"

Could an LLM plus a bunch of non-LLM infrastructure be forced to forgo all hallucinations?

Could a different architecture from current-day LLMs, one which actually performs cognition-like operations on its internal knowledge but which superficially resembles LLMs externally, forgo all hallucinations?

Could a pure LLM forgo hallucination?
Not with any training dataset which presently exists.

There's many degrees to "fixing" hallucinations. If we're talking about always stating things with 100% accuracy or replying with "unknown", then the LLMs will be restricted to producing mathematical formulas alone as anything else could be potentially attacked as a "hallucination" due to not being precise enough.

If we're talking about being less prone to hallucinations than a human with access to Google, then it's a much more realistic proposition and I would be YES on that.

predicts YES

Relevant - because if they did, then it's more or less solved.

predicts NO

@firstuserhere Eh, it's highly doubtful that getting a high benchmark in that kind of controlled environment is equivalent to solving the problem, imo. Like, I would still be very surprised if it doesn't still hallucinate about things right at the edge of its knowledge context or when it comes to more niche domain-specific questions.

predicts YES

@Sphinxfire That's why I bought it up only by <10%

@firstuserhere Tbh they aren't related. Also, I already mentioned why that market should just be N/Aed.

Gary Marcus offered to bet $100,000 on this! Seems like an important question, and personally I'm surprised to see this as high as 23% when Marcus is offering to bet so much against it! I'm going to add a 2000 mana subsidy here as part of the Public Interest Subsidy Program.

bought Ṁ200 NO from 30% to 28%

Oh, and I see someone made an even more specific market with a 3 month timeline in response to Hoffman! Arb away!

I have created a new related market set to the end of 2028.

Might depend on

Does the solution have to be widely implemented in publicly available free LLMs for this to resolve as yes or is it enough if a solution is shown to exist?

@JohnEg It just needs to exist!

predicts YES

@SneakySly Then I also see the issue of what counts here. An LLM with severely restricted output can't hallucinate. Tying its output to existing documentation is already done in production, so should this market resolve YES now?

As for wider stuff, here's this for example: https://blog.normalcomputing.ai/posts/2023-07-27-regex-guided-generation/regex-guided-generation.html
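The idea in that post, as I understand it, is constrained decoding: at each step, mask out candidate tokens that would make the output stop being a valid prefix of the target pattern. Here's a toy sketch of the principle (the "model", the date pattern, and the hand-rolled prefix checker are all my own illustrative stand-ins; the stdlib `re` module has no partial-match mode, so I brute-force a fixed template rather than handle arbitrary regexes):

```python
import re

DATE_RE = re.compile(r"\d{4}-\d{2}-\d{2}")

def is_valid_prefix(s):
    """True if s could still be extended into a full match of DATE_RE.
    We check against a fixed positional template ('d' = digit slot)
    instead of doing general regex partial matching."""
    template = "dddd-dd-dd"
    if len(s) > len(template):
        return False
    for ch, slot in zip(s, template):
        if slot == "d" and not ch.isdigit():
            return False
        if slot == "-" and ch != "-":
            return False
    return True

def constrained_decode(propose):
    """Greedy decoding where pattern-violating candidates are masked.
    `propose` is a stand-in for the model's ranked next-token candidates."""
    out = ""
    while not DATE_RE.fullmatch(out):
        for tok in propose(out):
            if is_valid_prefix(out + tok):
                out += tok
                break
        else:
            raise ValueError("no legal continuation from " + repr(out))
    return out

# Hypothetical "model" whose top-ranked candidate is often illegal;
# the mask silently skips it and takes the best legal token instead.
def sloppy_model(prefix):
    return ["x", "2", "0", "-"]

print(constrained_decode(sloppy_model))  # prints "2222-22-22"
```

Of course this only guarantees the output is well-formed, not that it's true, which is exactly the definitional problem being debated here.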

@JohnEg Ah I see, my point above was that any single relevant actor can have a solution, for example if GPT-6 solved hallucinations that counts for YES even if whatever version of Claude can’t.

We don’t have a robust non-narrow solution though. Obviously this market as is resolves NO.

predicts YES

@SneakySly What if there is a solution to hallucinations that works reliably in a narrow domain, but the general "do anything" models don't want to implement it?

@JohnEg Probably not counting as solved then as it would still be an open problem for the relevant LLM use cases.

New Gary Marcus substack today:

If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year. And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

bought Ṁ50 of NO

@SneakySly Just Gary being Gary. The literal Terminator could be about to crush his skull and he'd be condescendingly telling it that it's not intelligent in any real sense.

predicts NO

@jonsimon That said, don't expect hallucinations to go anywhere. People confabulate all the time; don't expect any different here.
