
How many non-Transformer based models will be in the top 10 on HuggingFace Leaderboard in the 7B range by July?
0: 39%
1–2: 32%
3–5: 24%
6+: 4%
For resolution, I’ll go to the Hugging Face leaderboard, select the ~7B filter, and uncheck everything else. I’ll refrain from participating in the market to stay neutral in case a hybrid edge case comes up. I’d count both Mamba and StripedHyena as non-Transformers.
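The resolution check above could be sketched in code: each Hugging Face model repo exposes an `architectures` field in its `config.json`, which one could match against known non-Transformer family names. This is a minimal illustrative heuristic, not an official taxonomy — the keyword list (Mamba and StripedHyena per the criteria above, plus RWKV as a further example) is an assumption.

```python
# Heuristic classifier for the resolution criteria: does a model's
# reported architecture belong to a non-Transformer family?
# The input mirrors the "architectures" list found in a Hugging Face
# model's config.json.

# Architecture-name fragments treated as non-Transformer families.
# Illustrative only; extend as new architectures appear.
NON_TRANSFORMER_KEYWORDS = ("mamba", "hyena", "rwkv")

def is_non_transformer(architectures):
    """Return True if any listed architecture name contains a known
    non-Transformer family keyword (case-insensitive)."""
    return any(
        kw in arch.lower()
        for arch in architectures
        for kw in NON_TRANSFORMER_KEYWORDS
    )
```

In practice one could fetch each top-10 entry's `config.json` (e.g. via `huggingface_hub.hf_hub_download`) and apply this check; note that under this rule, sliding-window variants such as Longformer still count as Transformers, consistent with the comment thread below.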
This question is managed and resolved by Manifold.
People are also trading
Who will have the highest ranking model on web.lmarena.ai by end of June 2025?
On January 1, 2027, will a Transformer-like model continue to hold the state-of-the-art position on most benchmarks?
84% chance
Will any model get above human level on the Simple Bench benchmark before September 1st, 2025?
44% chance
Will the top model by OpenAI rank 3rd (or lower) behind 2 other model families at any point before 2026?
69% chance
Will Transformer based architectures still be SOTA for language modelling by 2026?
80% chance
When will a non-Transformer model become the top open source LLM?
Will Mistral's next model make it to the top 10 models in LLM Arena by the end of 2025?
45% chance
Number of public models on 🤗Hugging Face by EOY 2024
By EOY 2025, will the model with the lowest perplexity on Common Crawl not be based on transformers?
10% chance
Will any AI model achieve > 40% on Frontier Math before 2026?
75% chance
@HanchiSun I’d say sliding window is a type of attention. I’d consider Longformer a type of Transformer.
@HanchiSun Out of curiosity, would you bet differently if it was for the 3B category rather than 7B?
@KLiamSmith Good question. It is definitely harder to experiment at 7B than 3B, but even for 3B, I doubt more than 2 non-attention architectures will be better.