Does hidden interpolation explain >25% of AI progress?
Resolves 2026 · 55% chance

Some of the apparent generalisation of LLMs is actually hidden interpolation: training corpora contain semantic duplicates of test-set items, and these are really hard to filter out.

For example, at least 7 of the 30 AIME 2025 questions were present on the internet before the competition.
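To illustrate why semantic duplicates are hard to filter out: decontamination pipelines commonly drop training documents that share long exact n-grams with the test set, but a paraphrase shares few such n-grams. The sketch below (hypothetical data, not any lab's actual pipeline) shows a verbatim leak being caught by an 8-gram filter while a paraphrased leak slips through.

```python
# Sketch of a simple n-gram decontamination filter (hypothetical example data).
# Exact n-gram matching catches verbatim leaks but misses semantic duplicates.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word-level n-grams in text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlaps(doc: str, test_item: str, n: int = 8) -> bool:
    """True if doc shares any n-gram with the test item (exact-match filter)."""
    return bool(ngrams(doc, n) & ngrams(test_item, n))

test_question = ("Find the number of ordered pairs of positive integers "
                 "a and b such that a plus b equals one hundred")

verbatim_leak = ("Find the number of ordered pairs of positive integers "
                 "a and b such that a plus b equals one hundred")

paraphrase_leak = ("How many ordered pairs of positive integers (a, b) "
                   "satisfy the equation a + b = 100?")

print(overlaps(verbatim_leak, test_question))    # True: filter catches it
print(overlaps(paraphrase_leak, test_question))  # False: semantic duplicate slips through
```

Catching the paraphrase would require semantic matching (e.g. embedding similarity), which is far more expensive and has no clean decision threshold, which is why such duplicates plausibly survive in training corpora.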


Resolution: at the end of next year, will I believe that >25% of apparent AI progress is actually hidden interpolation?

My current fraction (Dec 2025): 35%

If you want to use a model of me as well as your model of AI to answer, here are some of my views.
