Is code-davinci-002 just the largest non-GPT-4 model in the GPT-4 scaling law experiment?
Ṁ35 · Feb 17
44% chance
https://arxiv.org/pdf/2303.08774.pdf
Resolves Yes/No when I have >90% confidence.
This question is managed and resolved by Manifold.
Related questions
What will be true about GPT-5?
Is GPT-4 best? (Thru 2025)
63% chance
Will GPT-5 have more than 5 trillion parameters?
71% chance
Will GPT-5 have more than 10 trillion parameters?
31% chance
Will GPT-4 be trained (roughly) compute-optimally using the best-known scaling laws at the time?
30% chance
What will be true about GPT-5? (See description)
If we find out in 2024, was o1's Transformer base trained on 10+x as much compute as GPT-4's?
19% chance
Will GPT-4 improve on the Chinchilla scaling law?
43% chance
Will the performance jump from GPT-4 → GPT-5 be less than the one from GPT-3 → GPT-4?
75% chance
What hardware will GPT-5 be trained on?