Will I lead a completed pretraining of a >=1B param language model before EOY 2024?
Resolved NO on Jan 1
Must be trained on at least 100B tokens, and start from random initialization. Distillation is okay only if it meets these requirements.
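For illustration only, a minimal Python sketch of how the resolution criteria above could be checked: at least 1B parameters, at least 100B training tokens, training started from random initialization, and the pretraining run completed. The TrainingRun record and qualifies helper are assumptions made for this example, not part of any real tool or API.

from dataclasses import dataclass

@dataclass
class TrainingRun:
    """Hypothetical record of a pretraining run (illustrative only)."""
    param_count: int        # total trainable parameters
    tokens_seen: int        # tokens consumed during pretraining
    from_random_init: bool  # True if weights started from random initialization
    completed: bool         # True if the pretraining run finished

def qualifies(run: TrainingRun) -> bool:
    """Check the market's stated criteria: >=1B params, >=100B tokens,
    trained from random initialization, and pretraining completed."""
    return (
        run.completed
        and run.param_count >= 1_000_000_000
        and run.tokens_seen >= 100_000_000_000
        and run.from_random_init
    )

# Example: a 1.3B-parameter run on 300B tokens from scratch would qualify.
print(qualifies(TrainingRun(1_300_000_000, 300_000_000_000, True, True)))  # True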
I'll cast an initial guess and then not participate further in this market
This question is managed and resolved by Manifold.
Related questions
Will any 10 trillion+ parameter language model that follows instructions be released to the public before 2026? (43% chance)
End of pre-training era for language models: Will an LM fine-tune for more FLOPs than it is pre-trained for, before 2026 (44% chance)
Will a lab train a >=1e26 FLOP state space model before the end of 2025? (15% chance)
Will there be an AI language model that strongly surpasses ChatGPT and other OpenAI models before the end of 2025? (46% chance)
Will Transformer based architectures still be SOTA for language modelling by 2026? (79% chance)
Will a Large Language Model save a human life through medical advice by the end of 2025? (90% chance)
Will a Large Language Model be listed as an author on a peer-reviewed paper by the end of 2025? (30% chance)
By 2028, will there be a language model of less than 10B parameters that is superior to GPT-4? (84% chance)
Will a language model that runs locally on a consumer cellphone beat GPT4 by EOY 2026? (78% chance)
Will any open source LLM with <20 billion parameters outperform GPT-4 on most language benchmarks by the end of 2024? (13% chance)