Will a 10B-parameter multimodal RL model be trained by DeepMind in the next 12 months?
Resolved NO · Oct 13 · Ṁ1,039
Taken from the first prediction in the State of AI Report: "A 10B-parameter multimodal RL model is trained by DeepMind, an order of magnitude larger than Gato."

This question will resolve according to the assessment given in the 2023 edition of the report.
This question is managed and resolved by Manifold.
Related questions
Will any 10 trillion+ parameter language model that follows instructions be released to the public before 2026? (40% chance)
Will OpenAI announce a multi-modal AI capable of any input-output modality combination by end of 2025? ($1000M subsidy) (83% chance)
Will OpenAI release next-generation models with varying capabilities and sizes? (68% chance)
Will Google DeepMind and OpenAI have a major collaborative initiative by the end of 2025? (1000 mana subsidy) (25% chance)
An AI model with 100 trillion parameters exists by the end of 2025? (20% chance)
AI model training time decreases fourfold by mid-2027? (36% chance)
Benchmark Gap #6: Once we have a transfer model that achieves human-level sample efficiency on many major RL environments, how many months will it be before we have a non-transfer model that achieves the same? (12 months)
AI: Will someone train a $10B model by 2030? (85% chance)
Which of the following breakthroughs will DeepMind achieve by 2030?
AI: Will someone train a $1B model by 2028? (81% chance)