Will a 10B parameter multimodal RL model be trained by Deepmind in the next 12 months?
Ṁ1,039 · Resolved NO · Oct 13
Taken from the first prediction in the State of AI Report.
A 10B parameter multimodal RL model is trained by DeepMind, an order of magnitude larger than Gato.
This question will be resolved based on the resolution of the 2023 report.
This question is managed and resolved by Manifold.
The description is a bit misleading, since Gato is not even a classical RL model; it just performs similar tasks using transformers. Anyway, we will have to see the report, but I believe this will resolve positively because RT-2 meets those criteria.
Related questions
Will any 10 trillion+ parameter language model that follows instructions be released to the public before 2026?
48% chance
By March 14, 2025, will there be an AI model with over 10 trillion parameters?
14% chance
Will Google Deepmind and OpenAI have a major collaborative initiative by the end of 2030? (1000 mana subsidy)
53% chance
Which of the following breakthroughs will Deepmind achieve by 2030?
An AI model with 100 trillion parameters exists by the end of 2025?
22% chance
Will OpenAI offer a model that updates its weights while running during 2025?
26% chance
Will Google Deepmind and OpenAI have a major collaborative initiative by the end of 2025? (1000 mana subsidy)
16% chance
AI: Will someone train a $10T model by 2100?
57% chance
Will OpenAI announce a multi-modal AI capable of any input-output modality combination by end of 2025? ($1000M subsidy)
85% chance
Will OpenAI release next-generation models with varying capabilities and sizes?
77% chance