By EOY 2025, will the model with the lowest perplexity on Common Crawl not be based on transformers?
28% chance
If perplexity on Common Crawl is not available for models, I will use other benchmarks as a surrogate. This will inherently be a judgement process. If a model has not been announced by EOY 2025 and no benchmarks have been posted publicly, it will not be counted for the purpose of this market.
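For reference, perplexity is the exponential of the average negative log-likelihood a model assigns to held-out text. A minimal sketch of the computation, using hypothetical per-token log-probabilities rather than actual Common Crawl evaluations:

```python
import math

# Hypothetical per-token log-probabilities (natural log) that a model
# assigns to a held-out sample; a real evaluation would use the model's
# actual outputs on Common Crawl documents.
token_logprobs = [-2.1, -0.4, -3.0, -1.2, -0.8]

# Perplexity = exp(average negative log-likelihood per token).
avg_nll = -sum(token_logprobs) / len(token_logprobs)
perplexity = math.exp(avg_nll)
print(round(perplexity, 2))  # → 4.48
```

Lower perplexity means the model assigns higher probability to the observed text, which is why it serves as the headline comparison here.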
"Based on transformers" for the purpose of this question will be anything with multi-headed self-attention that feeds into an MLP.
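To make that definition concrete, here is a toy sketch of the pattern the question targets: multi-head self-attention whose output feeds into an MLP, with residual connections. All weights are random stand-ins for learned parameters, and layer norm is omitted for brevity; this is an illustration of the criterion, not any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, n_heads, rng):
    """Toy block: multi-head self-attention feeding into an MLP."""
    seq, d = x.shape
    dh = d // n_heads
    # Random projections stand in for learned attention weights.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Split into heads: (n_heads, seq, dh)
    def split(t):
        return t.reshape(seq, n_heads, dh).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)
    # Scaled dot-product attention per head.
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh)) @ v
    # Merge heads and project back to model dimension.
    attn = attn.transpose(1, 0, 2).reshape(seq, d) @ Wo
    h = x + attn  # residual connection

    # Two-layer MLP (ReLU) with the usual 4x hidden expansion.
    W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
    W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)
    return h + np.maximum(h @ W1, 0) @ W2  # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 tokens, model dimension 8
y = transformer_block(x, n_heads=2, rng=rng)
print(y.shape)  # → (5, 8)
```

Any architecture containing this attention-into-MLP motif would count as "based on transformers" for resolution, regardless of other modifications.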
@ConnorMcCormick oh yeah, that's definitely confusing people. Well, better for those of us who do understand it :)
@jacksonpolack The API only refreshes the data every 15 seconds, so if you're quick on the draw, it's totally doable.
Related questions
Which of these companies will release a model that thinks before it responds like O1 from OpenAI by EOY 2024?
Will Anthropic, Google, xAI or Meta release a model that thinks before it responds like o1 from OpenAI by EOY 2024?
84% chance
Who will have the best Text-to-Image Model at the end of 2024 (as decided by the Artificial Analysis Leaderboard)?
By the end of Q2 2025 will an open source model beat OpenAI’s o1 model?
53% chance
Will transformers still be the dominant DL architecture in 2026?
52% chance
Will Transformer based architectures still be SOTA for language modelling by 2026?
69% chance
By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
25% chance
On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
56% chance
By the end of Q1 2025 will an open source model beat OpenAI’s o1 model?
30% chance
Will Adam optimizer no longer be the default optimizer for training the best open source models by the end of 2026?
40% chance