Is code-davinci-002 just the largest non-GPT-4 model in the GPT-4 scaling law experiment?
44% chance (as of Feb 17)
https://arxiv.org/pdf/2303.08774.pdf
Resolves Yes/No when I have >90% confidence.
Related questions
Is GPT-2 the response to the phi-3 model? (~GPT-2 size, but trained with next-gen OpenAI models)
10% chance
Is GPT-4 (0613) more capable than GPT-4 (0314)?
71% chance
Will GPT-5 have fewer parameters than GPT-4? (1500M subsidy)
15% chance
Will GPT-4 have Einstein mode?
7% chance
Will GPT-4 be trained (roughly) compute-optimally using the best-known scaling laws at the time?
30% chance
Will GPT-5 have more than 10 trillion parameters?
29% chance
Will GPT-4 escape?
9% chance
How many parameters will GPT-4 have?
Is gpt2-chatbot OpenAI's next-generation model (e.g. GPT-4.5, GPT-5, etc.)?
10% chance
Will any open-source model achieve GPT-4 level performance on MMLU through 2024?
83% chance