Does hidden interpolation explain >25% of AI progress?
55% chance
Some of the apparent generalisation of LLMs is actually hidden interpolation on semantic duplicates of the test set that end up in training corpora. These duplicates are very hard to filter out.
E.g., at least 7 of the 30 AIME 2025 questions were present on the internet before the competition.
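A toy sketch of why such duplicates are hard to filter: standard decontamination pipelines check for exact word n-gram overlap between training documents and test items, so a verbatim copy is caught but a paraphrase slips through. The 5-gram threshold and the AIME-style example strings below are illustrative assumptions, not any lab's actual pipeline.

```python
# Illustrative only: exact n-gram decontamination vs. a semantic duplicate.
# The 5-gram threshold and example strings are assumptions for the sketch.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shares_ngram(a: str, b: str, n: int = 5) -> bool:
    """Exact-match contamination check: do the texts share any word n-gram?"""
    return bool(ngrams(a, n) & ngrams(b, n))

test_item  = "Find the number of positive integers n less than 1000 divisible by 7"
verbatim   = "Find the number of positive integers n less than 1000 divisible by 7"
paraphrase = "How many positive integers below 1000 are multiples of 7?"

print(shares_ngram(test_item, verbatim))    # True: exact copy is caught
print(shares_ngram(test_item, paraphrase))  # False: semantic duplicate slips through
```

Catching the paraphrase would require embedding-based or fuzzy matching over the whole training corpus, which is far more expensive and noisier, hence "really hard to filter out."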
Resolution: at the end of next year, will I believe that >25% of apparent AI progress is actually hidden interpolation?
My current fraction (Dec 2025): 35%
If you want to answer using a model of me as well as your model of AI, here are some of my views.
This question is managed and resolved by Manifold.
Related questions
If the progress of AI experiences a slowdown before 2030, what might be the cause?
Did OpenAI make a breakthrough in Q* learning dramatically shortening AGI timelines?
21% chance
Conditional on a major breakthrough happening in physics thanks to AI, will it be due to simulation based inference?
19% chance
Conditional on a major breakthrough happening in physics thanks to AI, will it be due to deep learning?
72% chance
By what percentage will using AI slowdown/speedup developers in the second METR study?
Will AI progress surprise Metaculus?
76% chance
[Carlini questions] Most improvements in best AI systems direct result of the prior generation of AI systems
Will an AI achieve >80% performance on the FrontierMath benchmark before 2027?
41% chance
Will advanced AI systems be found to have faked data on algorithm improvements for purposes of positive reinforcement by end of 2035?
50% chance
Will AI be Recursively Self Improving by mid 2026?
28% chance