By EOY 2025, will the model with the lowest perplexity on Common Crawl not be based on transformers?
28% chance
If perplexity on Common Crawl is not available for models, I will use other benchmarks as a surrogate. This will inherently be a judgement process. If a model has not been announced by EOY 2025 and no benchmarks have been posted publicly, it will not be counted for the purpose of this market.
"Based on transformers" for the purpose of this question will be anything with multi-headed self-attention that feeds into an MLP.
This question is managed and resolved by Manifold.
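To make the resolution criterion concrete, here is a minimal, framework-free sketch of the architecture the definition above covers: multi-headed self-attention whose output feeds into a position-wise MLP. All dimensions and random weights are purely illustrative, not drawn from any real model.

```python
# Minimal transformer block: multi-headed self-attention -> MLP.
# Pure Python, illustrative weights only.
import math
import random

random.seed(0)

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def self_attention(x, wq, wk, wv):
    # Scaled dot-product attention for one head.
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d = len(q[0])
    scores = [[sum(qi * ki for qi, ki in zip(qr, kr)) / math.sqrt(d) for kr in k]
              for qr in q]
    weights = [softmax(row) for row in scores]
    return matmul(weights, v)

def transformer_block(x, n_heads, d_model):
    d_head = d_model // n_heads
    # Multi-headed self-attention: run each head, concatenate outputs.
    heads = []
    for _ in range(n_heads):
        wq, wk, wv = (rand_mat(d_model, d_head) for _ in range(3))
        heads.append(self_attention(x, wq, wk, wv))
    attn = [sum((h[t] for h in heads), []) for t in range(len(x))]
    # ...which feeds into a position-wise MLP (ReLU between two projections).
    w1, w2 = rand_mat(d_model, 2 * d_model), rand_mat(2 * d_model, d_model)
    hidden = [[max(0.0, v) for v in row] for row in matmul(attn, w1)]
    return matmul(hidden, w2)

tokens = rand_mat(4, 8)  # sequence of 4 tokens, d_model = 8
out = transformer_block(tokens, n_heads=2, d_model=8)
print(len(out), len(out[0]))  # output keeps the input shape: 4 8
```

Any model containing a block like this would count as "based on transformers" for this market; architectures such as pure state-space models or RNNs, which lack the self-attention step, would not.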
@ConnorMcCormick oh yeah that's definitely confusing people. Well, better for us who do understand it :)
@jacksonpolack The API only refreshes the data every 15 seconds, so if you're quick on the draw, it's totally doable.
Related questions
Will Anthropic, Google, xAI or Meta release a model that thinks before it responds like o1 from OpenAI by EOY 2024?
75% chance
Will Transformer based architectures still be SOTA for language modelling by 2026?
68% chance
By EOY 2026, will it seem as if deep learning hit a wall by EOY 2025?
24% chance
When will a non-Transformer model become the top open source LLM?
On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
54% chance
Will OpenAI release an image model better than DALL-E 3 in 2024?
67% chance
Which of these companies will release a model that thinks before it responds like O1 from OpenAI by EOY 2024?
Will openAI have the most accurate LLM across most benchmarks by EOY 2024?
39% chance
Will any open-source Transformer LLM that functions as a dense mixture of experts be released by end of 2024?
50% chance
By the end of Q2 2025 will an open source model beat OpenAI’s o1 model?
61% chance