Will SOTA for RL on Atari-57 include a large pretrained language/image/video model by 2024?
Resolved NO (Jan 3)
It's fine if the training dataset for the large model contains the Atari environments, this isn't a question about sample efficiency.
I'll accept SOTA for either mean or median HNS.
The Atari-57 benchmark on papers with code: https://paperswithcode.com/sota/atari-games-on-atari-57
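For context, here is a minimal sketch of how mean and median HNS are typically computed, assuming the standard human-normalized score convention from the Atari literature (0 = random play, 1 = human baseline). The per-game baseline values below are illustrative placeholders, not the official Atari-57 numbers.

```python
# Sketch of human-normalized score (HNS) aggregation for Atari-57.
# Baseline numbers are placeholders for illustration only.
import statistics

def hns(agent_score: float, random_score: float, human_score: float) -> float:
    """Human-normalized score: 0.0 = random play, 1.0 = human baseline."""
    return (agent_score - random_score) / (human_score - random_score)

# Hypothetical per-game results (agent score, random baseline, human baseline).
results = {
    "Breakout":  {"agent": 420.0, "random": 1.7,   "human": 30.5},
    "Pong":      {"agent": 20.9,  "random": -20.7, "human": 14.6},
    "Montezuma": {"agent": 400.0, "random": 0.0,   "human": 4753.3},
}

scores = [hns(r["agent"], r["random"], r["human"]) for r in results.values()]
print("mean HNS:  ", statistics.mean(scores))
print("median HNS:", statistics.median(scores))
```

The mean is pulled up by games where an agent far exceeds the human baseline (e.g. Breakout above), while the median is robust to such outliers, which is why the two aggregates can differ substantially and both are accepted here.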
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ32
2 | | Ṁ5
3 | | Ṁ1
People are also trading
Will Transformer based architectures still be SOTA for language modelling by 2026?
79% chance
Will a SOTA model be trained with Kolmogorov-Arnold Networks by 2029?
8% chance
Will a SOTA open-sourced LLM forecasting system make major use of quasilinguistic neural reps (QNRs) before 2027?
19% chance
Will a transformer based model be SOTA for video generation by the end of 2025?
82% chance
Will humans create a SOTA AI model without Multi-Layer Perceptrons by 2029?
39% chance
At the end of 2025 will a DINO-based algorithm still be SOTA for self-supervised learning in vision?
57% chance
Will an open sourced SOTA LLM be trained on Intel hardware by 2024?
14% chance
By 2026, the SOTA in image generation will be using mind reading to control the generation.
15% chance
Will all of the publicly accessible parts of heavengames.com/aok.heavengames.com become part of a large language model like Claude or GPT by 2025?
59% chance
Will a single model achieve superhuman performance on all Atari environments by 2025?
22% chance