This market is quite similar to the following market. However, instead of focusing on the papers, it leans more toward practical relevance by considering a mix of the most commonly used models and the SOTA models at the end of 2026.
I am including both Adam and close variants (e.g. AdamW).
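To make "close variants" concrete: AdamW differs from Adam only in where weight decay is applied (decoupled from the gradient rather than folded into it). Below is a minimal, self-contained sketch of a single Adam/AdamW step on a scalar parameter; the function name and flag are my own illustration, not from any particular library.

```python
import math

def adam_step(p, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              weight_decay=0.0, decoupled=False):
    """One Adam (or AdamW, if decoupled=True) step on a scalar parameter.

    Illustrative sketch only; real implementations operate on tensors.
    """
    if weight_decay and not decoupled:
        grad = grad + weight_decay * p          # classic Adam: L2 penalty folded into the gradient
    m = b1 * m + (1 - b1) * grad                # first-moment (mean) EMA
    v = b2 * v + (1 - b2) * grad * grad         # second-moment (uncentered variance) EMA
    m_hat = m / (1 - b1 ** t)                   # bias correction for zero-initialized EMAs
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    if weight_decay and decoupled:
        p = p - lr * weight_decay * p           # AdamW: decay applied directly to the weights
    return p, m, v
```

The only difference between the two branches is whether the decay term passes through the adaptive rescaling, which is why AdamW counts as a "close variant" for resolution purposes.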
I will sample a mixture of the best open-source models in 2026. I will not participate in this market, in case judgement is required.
If training details are not provided, I can reach out to the creators of the models; if they decline, we move on and pick the next best thing.
Relevant: AlgoPerf
The AlgoPerf: Training algorithms benchmark competition opens on November 28, 2023, and is scheduled to close on March 28, 2024. To enter the competition please see the instructions on the competition website. Additionally, the accompanying technical report motivates and explains the design choices of the benchmark.
Sponsorship & Prize Money
MLCommons is offering a total prize pool of $50,000, to be awarded by a committee, for the top-performing submissions in each tuning ruleset.
We would also like to express our gratitude to Google for their generous support in providing computational resources to score the top submissions, and resources to help score promising submissions from submitters with more limited resources.
Jack Clark says:
With AlgoPerf, we might finally have a decent, principled way to evaluate new optimizers and work out if they're actually any good.
My Lilith will dethrone it :)
@NiplavYushtun Yup. And aside from minor variations it’s been the staple ever since. It seems to survive everything and works on new architectures that weren’t even invented in 2014, like transformers. Makes me think it’ll still be around for a couple more years at least.