Will I lead a completed pretraining of a >=1B param language model before EOY 2024?
87% chance
Must be trained on at least 100B tokens, and start from random initialization. Distillation is okay only if it meets these requirements.
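For concreteness, a minimal sketch (Python) of how one might check whether a decoder-only transformer configuration clears the >=1B parameter bar. The counting formula and the example configuration are illustrative assumptions, not taken from this market.

```python
# Rough parameter count for a GPT-style decoder-only transformer.
# Formula and configuration are illustrative assumptions, not from the market.

def approx_params(d_model: int, n_layers: int, vocab_size: int) -> int:
    """Estimate parameter count, ignoring biases and layernorm weights."""
    embeddings = vocab_size * d_model   # token embedding (output head tied)
    attention = 4 * d_model * d_model   # Q, K, V, and output projections
    mlp = 8 * d_model * d_model         # two linear layers with 4x expansion
    return embeddings + n_layers * (attention + mlp)

# A hypothetical configuration that clears the bar:
n = approx_params(d_model=2048, n_layers=24, vocab_size=32000)
print(f"{n / 1e9:.2f}B parameters")   # ~1.27B, so >=1B

# The 100B-token requirement is a separate criterion, independent of model size.
TOKENS_REQUIRED = 100_000_000_000
```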
I'll cast an initial vote as a guess and then not participate further in this market.
Related questions
- Will there be an AI language model that strongly surpasses ChatGPT and other OpenAI models before the end of 2024? (18% chance)
- Will a language model that runs locally on a consumer cellphone beat GPT4 by EOY 2026? (64% chance)
- Will Transformer based architectures still be SOTA for language modelling by 2026? (69% chance)
- Will a lab train a >=1e26 FLOP state space model before the end of 2025? (25% chance)
- Will any language model trained without large number arithmetic be able to generalize to large number arithmetic by 2026? (53% chance)
- Will a Large Language Model be listed as an author on a peer-reviewed paper by the end of 2025? (38% chance)
- Will Meta release an open source language model that outperforms GPT-4 by the end of 2024? (63% chance)
- By 2025 will there be a competitive large language model with >50% of the total training data generated from a large language model? (75% chance)
- Will there be an AI language model that surpasses ChatGPT and other OpenAI models before the end of 2025? (65% chance)
- Will there be an LLM which can do fluent conlang translations by EOY 2024? (57% chance)