Will I lead a completed pretraining of a >=1B param language model before EOY 2024?
Resolved NO (Jan 1)
The model must be trained on at least 100B tokens and start from random initialization. Distillation is okay only if it meets these requirements.
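For concreteness, a minimal sketch of how one might check a training run against these two thresholds, assuming a hypothetical GPT-style decoder-only config (the vocab size and all layer sizes below are illustrative assumptions, not details from this market):

```python
# Minimal sketch: sanity-checking the market's two thresholds for a
# hypothetical GPT-style decoder-only config. All sizes below are
# illustrative assumptions, not numbers from this market.

def approx_param_count(vocab_size: int, d_model: int, n_layers: int, d_ff: int) -> int:
    """Rough parameter count, ignoring biases, layer norms, and positional embeddings.

    Per layer: 4 * d_model^2 for the Q/K/V/output attention projections,
    plus 2 * d_model * d_ff for the MLP up- and down-projections.
    """
    embedding = vocab_size * d_model  # token embeddings (often tied to the output head)
    per_layer = 4 * d_model**2 + 2 * d_model * d_ff
    return embedding + n_layers * per_layer

params = approx_param_count(vocab_size=50_000, d_model=2048, n_layers=20, d_ff=8192)
tokens = 100_000_000_000  # the market's 100B-token floor

print(f"~{params / 1e9:.2f}B params")  # ~1.11B: clears the >=1B bar
print("meets criteria:", params >= 1e9 and tokens >= 100e9)
```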
I'll place an initial bet as a guess and then not participate further in this market.
Related questions
Will a GPT-4-level efficient HRM-based language model be released before Feb 2026? [Details in description]
5% chance
Will a language model that runs locally on a consumer cellphone beat GPT-4 by EOY 2026?
79% chance
End of pre-training era for language models: Will an LM fine-tune for more FLOPs than it is pre-trained for, before 2026?
37% chance
Will we reverse-engineer a language model into an interpretable (python) program by 2027?
4% chance
Will AI (large language models) collapse by May 2026?
10% chance
By 2030, will large language models still be at the peak of AI? [DRAFT]
25% chance
How many distinct companies will hold the spot for [my favorite language model for >= 1 contiguous month] in 2026?