This market is duplicated from and inspired by
/Manifold/what-will-be-the-best-performance-o-nzPCsqZgPc
The best performance by an AI system on the new Humanity's Last Exam benchmark as of December 31st, 2025.
https://lastexam.ai/
Resolution criteria
Resolves to the best AI performance on the multimodal version of Humanity's Last Exam. This resolution will use https://scale.com/leaderboard/humanitys_last_exam as its source, if it remains up to date at the end of 2025. Otherwise, I will use my discretion in determining whether a result should be considered valid.
If the reported number is exactly on a boundary (e.g. 10%), then the higher choice will be used (i.e. 10-20%).
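As a rough illustration of this bucketing rule (not an official resolution script, and assuming 10-point answer buckets, which is my reading of the example above), a minimal Python sketch:

```python
def resolve_bucket(score_pct: float, width: int = 10) -> str:
    """Map the best reported score (in percent) to a market bucket,
    sending exact boundary values to the higher bucket."""
    if not 0 <= score_pct <= 100:
        raise ValueError("score must be a percentage in [0, 100]")
    # Integer division puts a boundary value (e.g. 10.0) into the higher bucket;
    # the min() keeps a score of exactly 100 inside the top bucket.
    lower = min(int(score_pct // width) * width, 100 - width)
    upper = lower + width
    return f"{lower}-{upper}%"

assert resolve_bucket(9.9) == "0-10%"
assert resolve_bucket(10.0) == "10-20%"   # exactly on the boundary -> higher choice
assert resolve_bucket(37.2) == "30-40%"
```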
@Bayesian sounds like an incentive to finetune my deepseek-giga-overfitter-hle-memorized-v1 model by EOY
@Bayesian most likely, but maybe they'll just put an asterisk and scold it in a footnote for being sus & bad. unclear how enforcement is actually handled in practice
/Bayesian/which-of-frontiermath-and-humanitys
@mathvc @copiumarc may the person with the best model of reality win
@qumeric if the benchmark is knowledge heavy it might not do that much better than 4o? prolly will tho. just some low chance that it doesn't
“The dataset consists of 3,000 challenging questions across over a hundred subjects. We publicly release these questions, while maintaining a private test set of held out questions to assess model overfitting.”
Well sorry but people are gonna overfit to this. Who is gonna judge whether the model is overfitted or not?
@mathvc yes i am confused by this point. so if some model near EOY is massively overfit to HLE, scores 90%+, and they chime in "yeah its performance wasn't so crazy strong on our few holdout problems, it probably overfit a bit", that still counts as 90%+ right? is the holdout set just used as a separate confirmation of overfitting, and it's not incorporated into the main score?
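(A purely illustrative sketch of how a public/private holdout split can flag overfitting; the numbers and threshold below are made up and are not HLE's or Scale's actual procedure or data:)

```python
def overfitting_gap(public_acc: float, holdout_acc: float) -> float:
    """Gap between accuracy on the released questions and on the private holdout."""
    return public_acc - holdout_acc

# Hypothetical numbers for a model that memorized the public question set:
gap = overfitting_gap(public_acc=0.92, holdout_acc=0.31)
print(f"public-holdout gap: {gap:.2f}")
if gap > 0.15:  # arbitrary threshold, purely for illustration
    print("large gap: the public-set score likely reflects memorization, not capability")
```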
@Bayesian i found that scale.ai and safe.ai partnered to create this benchmark and it seems that they keep up-to-date evaluations of all frontier models:
https://scale.com/leaderboard/humanitys_last_exam
I guess we can trust their judgment? That is, they will not put a clearly overfitted model on the leaderboard, since that would make the leaderboard useless.
@Bayesian i dunno i think all benchmarks have caveats so i'd just pick some source for what each model has achieved on the benchmark & if their screener for overfitting is weak that's kinda priced in