
Is scale unnecessary for intelligence (<10B param human-competitive STEM model before 2027)?
30% chance
Resolves YES if, before 2027, a neural net with <10B parameters achieves all of the following: >75% on GPQA, >80% on SWE-bench Verified, and >95% on MATH.
Arbitrary scaffolding is allowed (retrieval over a fixed database is fine), but no talking with other AIs and no internet access. We'll allow up to 1 minute per question. We'll use whatever tools are available at the time to determine whether the model memorized the answers to these datasets; if verbatim memorization obviously happened, the model will be disqualified.
This question is managed and resolved by Manifold.
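For concreteness, here is a minimal sketch of the score thresholds as a Python predicate. The function name and argument names are hypothetical illustrations, not part of any official resolution tooling; the memorization check and the one-minute-per-question limit would still be judged separately.

```python
# Minimal sketch of the market's resolution thresholds as a predicate.
# Function and argument names are hypothetical; the thresholds come
# from the market description above.

def resolves_yes(params_billions: float,
                 gpqa: float,
                 swe_bench_verified: float,
                 math: float) -> bool:
    """True if a model meets every score threshold.

    Scores are fractions in [0, 1]; params_billions is the parameter
    count in billions. The memorization check and the one-minute time
    limit are enforced separately by the resolver.
    """
    return (params_billions < 10
            and gpqa > 0.75
            and swe_bench_verified > 0.80
            and math > 0.95)

# A hypothetical 7B model scoring 78% / 82% / 96% would qualify;
# the same scores from a 70B model would not (parameter cap).
assert resolves_yes(7, 0.78, 0.82, 0.96)
assert not resolves_yes(70, 0.78, 0.82, 0.96)
```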
Related questions
Is scale unnecessary for intelligence (<10B param human-competitive STEM model before 2030)? 75% chance
Will scaling transformers lead to a 60% score on ARC-AGI-2? 59% chance
Will any model get above human level on the Simple Bench benchmark before September 1st, 2025? 42% chance
Conditional on no existential catastrophe, will there be a superintelligence by 2100? 90% chance
Will a single model achieve superhuman performance on all Atari environments by 2025? 22% chance
Conditional on no existential catastrophe, will there be a superintelligence by 2050? 72% chance
Conditional on no existential catastrophe, will there be a superintelligence by 2040? 62% chance
Will an AI model with 100 trillion parameters exist by the end of 2025? 20% chance
Conditional on no existential catastrophe, will there be a superintelligence by 2030? 40% chance
Will an AI model achieve superhuman Elo on Codeforces by 31 December 2025? 35% chance