Will ChatGPT have a level of "fact-checking" by the end of 2023?
resolved Jan 1

Will resolve Yes if an article is released discussing ChatGPT coming with a level of checking its sources before continuing, and an actually sufficient level of "fact-checking" is released (enough that it can fix basic logical errors in its own work without being told to do so explicitly by the user).

A model option only available on ChatGPT+ would resolve this as Yes, but a plugin would not be enough to resolve this as it's not built-in.

bought Ṁ10 of NO

I don't think that ChatGPT will have the ability to fact-check, because any result ChatGPT produces is the output of big data run through its algorithm. There is no model for checking the data source during the building of the AI. If ChatGPT wanted to develop this kind of ability, it would need to be reconstructed: its query pipeline would change completely, and the developers might have to create their own query engine for it. Obviously, that's impossible.

@JiaqiuHuangfu I don't agree; I think it's still possible by the end of 2023. ChatGPT currently lacks source verification, but targeted data analysis and statistical metrics could enable basic fact-checking capabilities by then. In particular, training on verified datasets provides a basis for validating responses against known facts as an authority benchmark. Statistical analysis could then identify language patterns and inconsistencies in responses that deviate from factual accuracy. Techniques such as anomaly detection and natural language processing could flag semantic inconsistencies. Regression analysis would let researchers model the relationship between factual accuracy and language features. These statistical models could assign confidence scores or flag potentially inaccurate responses. While this would not amount to comprehensive fact-checking, modeling responses against verified sources provides a data-driven way to detect discrepancies and inconsistencies that might otherwise go undetected. Through careful statistical modeling, ChatGPT could gain limited "fact-checking" abilities without fundamental architectural changes, laying the foundation for more thorough credibility analysis in future iterations.
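
The confidence-scoring idea above can be sketched in a few lines. This is a minimal illustration under assumed details: the verified-fact store, the token-overlap metric, and the 0.3 threshold are all hypothetical choices for the sake of the example, not anything ChatGPT actually implements.

```python
# Hypothetical sketch: flag model answers that deviate too far from a small
# store of verified facts. Fact store, scoring metric, and threshold are all
# illustrative assumptions.

VERIFIED_FACTS = {
    "boiling point of water at sea level": "100 degrees celsius",
    "chemical symbol for gold": "the chemical symbol for gold is au",
}

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two strings (0..1)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def confidence_score(topic: str, answer: str) -> float:
    """Score an answer against the verified fact for its topic."""
    fact = VERIFIED_FACTS.get(topic)
    if fact is None:
        return 0.5  # no reference fact: neither confirmed nor refuted
    return token_overlap(fact, answer)

def flag_if_suspect(topic: str, answer: str, threshold: float = 0.3) -> bool:
    """Return True when the answer deviates too far from the known fact."""
    return confidence_score(topic, answer) < threshold

print(flag_if_suspect("boiling point of water at sea level",
                      "water boils at 100 degrees celsius"))   # False
print(flag_if_suspect("boiling point of water at sea level",
                      "water boils at 90 degrees fahrenheit"))  # True
```

A real system would replace the token overlap with semantic similarity (embeddings) and the toy dictionary with a retrieval layer over a verified corpus, but the flag-on-low-confidence shape would be the same.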

An article from Anthropic discusses incrementally training and evaluating ChatGPT to improve factual consistency: https://www.anthropic.com/blog/anthropic-research/factual-consistency

An OpenAI blog post introduces annotating and evaluating ChatGPT mistakes to enable more accurate responses: https://openai.com/blog/annotating-chatgpt-mistakes/

bought Ṁ15 of NO

Developers face significant hurdles in releasing a fact-checking AI tool. Beyond technical capability, challenges include potential bias and subjectivity in AI responses. There are resource constraints on maintaining a large fact-checking system, which requires reliable databases and continuous updates. The risk of providing inaccurate information complicates matters further. Moreover, public skepticism about controlled information sources must be addressed for the tool to gain trust. The evolving information landscape poses additional complexities. These factors collectively make the release of such a tool more intricate and uncertain.

Hsu, T., & Thompson, S. A. (2023, September 29). Fact checkers take stock of their efforts: “it’s not getting better.” The New York Times. https://www.nytimes.com/2023/09/29/business/media/fact-checkers-misinformation.html
ChatGPT. (2023, October 8). Fact Check AI Futures.


predicted NO

The primary concern regarding the reliability of information from ChatGPT is the lack of source verification. These models do not possess the ability to verify or cross-reference the credibility, accuracy, or trustworthiness of the sources from which they have learned. The information they generate is based on patterns and data encountered during training. It would take a drastic change to implement "fact-checking" by the end of 2023.


I agree with his viewpoint that the greater concern involving AI is the source of the data with which it provides a response. I would definitely have made the same bet that ChatGPT will not have a fact-checking system in place by the end of 2023. While Automated Fact-Checking (AFC) has been heavily invested in since 2016/2017, as AI came to the forefront, there are many nuances that AI cannot yet interpret properly. An article from Oxford University reported that research and start-up seed funding exceeded $2,865,000 (using currency conversions from 11/6/2023). The article goes quite deep into what is being studied for AFC and the limitations researchers were trying to overcome at the time. It is important to note that there were nearly no paid employees doing the research, and that the labor cost of running such a process for a person's AI would be extreme.


@RahulShah What just happened?

bought Ṁ10,000 of NO

@firstuserhere It seems like OAI has put more emphasis on multimodal models rather than making a new model just for fact-checking or improving logical/math capabilities. Nothing much else other than me getting a Mana inflow and filling bets I had removed earlier when in a crunch to pay for Manifest lol

bought Ṁ27 of YES


an actually sufficient level of "fact-checking" is released (enough so that it can fix basic logical errors in its own work without being told to do so explicitly by the user).

Sounds to me like this could be done to the traditional GPT-3.5 or 4 models instead of releasing a new model just for this
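
A self-check pass of that sort could in principle be layered on an existing model. Below is a minimal Python sketch of the loop the market description asks for: produce a draft, then automatically ask for a critique-and-fix, with no user prompting. `call_model` is a hypothetical stub standing in for two GPT-3.5/4 API calls; the prompts and the planted arithmetic slip are illustrative only.

```python
# Hypothetical sketch of a built-in self-check loop. `call_model` is a stub,
# not a real API; a production version would make two model calls with these
# two prompts.

def call_model(prompt: str) -> str:
    """Stub model: 'critiques' a draft by patching one known arithmetic slip."""
    if prompt.startswith("Check this draft"):
        draft = prompt.split(":", 1)[1].strip()
        return draft.replace("2 + 2 = 5", "2 + 2 = 4")
    return "2 + 2 = 5"  # deliberately flawed first draft

def answer_with_self_check(question: str) -> str:
    """Draft an answer, then run an automatic critique-and-fix pass on it."""
    draft = call_model(question)
    return call_model(f"Check this draft for logical errors and fix them: {draft}")

print(answer_with_self_check("What is 2 + 2?"))  # prints "2 + 2 = 4"
```

The point of the sketch is the control flow: the correction step happens inside the pipeline, so the user never has to ask for it, which is exactly the bar the resolution criteria set.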

predicted NO

@firstuserhere Well, this would clearly have been resolved if that qualified, haha. I have seen numerous cases where it misses pretty glaring mistakes (GPT-4 told me yesterday that 1/n = 1/n^2 in a proof when it couldn't figure out how else to make the proof work).

predicted YES

@RahulShah Maybe they got something special cooking for the dev day :p

Anyway, seems unlikely. My YES bets are simply because I thought the % is too low.

predicted NO

@firstuserhere I'd be very happy to resolve this as Yes, it'd be amazing if they put an emphasis on fact-checking and stopping hallucinations! So I am also hoping they are cooking for dev day :)

predicted YES

@RahulShah This is a very important question, and efforts in this direction are worth attempting. Recently I was awarded a 50k budget to subsidize good markets around Manifold, so I'm going to use 1k of that to add some extra liquidity to this market.

bought Ṁ500 of YES



Just wanted to comment on how horrifying the AI art is for this market

predicted YES

@bjubes Yes, it's the baboon T-800.
