Will a 10B parameter multimodal RL model be trained by Deepmind in the next 12 months?
360 traders · Ṁ1,039 · resolved Oct 13
Resolved NO
Taken from the first prediction in the State of AI Report: "A 10B parameter multimodal RL model is trained by DeepMind, an order of magnitude larger than Gato."
This question will be resolved based on how the 2023 edition of the report grades that prediction.
This question is managed and resolved by Manifold.
The description is a bit misleading, since Gato is not a classical RL model; it just performs similar tasks using transformers. Anyway, we will have to see the report, but I believe this will resolve positively because RT-2 meets those criteria.
Related questions
AI model training time decreases fourfold by mid-2027?
36% chance
Benchmark Gap #6: Once we have a transfer model that achieves human-level sample efficiency on many major RL environments, how many months will it be before we have a non-transfer model that achieves the same?
12 months
AI: Will someone train a $10B model by 2030?
85% chance
Which of the following breakthroughs will Deepmind achieve by 2030?
Will any single AI model provider announce a raise >$10B in a single round before June 30, 2026?
AI: Will someone train a $1B model by 2028?
82% chance
Before 2028, will OpenAI offer a model that can work on a task continuously for at least a week?
66% chance
Will Google Deepmind and OpenAI have a major collaborative initiative by the end of 2030? (1000 mana subsidy)
53% chance