What made gpt2-chatbot smarter (than GPT4)?
- More and higher quality data: 64%
- Better model architecture design compared to GPT4 (e.g., long context, MoD, etc.): 62%
- More parameters: 33%
- Model self-play/RL: 31%
- A new unsupervised training paradigm (not next-token prediction; more than 200B tokens of pretraining): 23%
- Ilya sitting behind the chat model: 4%
- multimodal training: Resolved YES

We will resolve either when OpenAI releases enough information (e.g., a technical report) or, otherwise, based on public consensus by EOY 2024.

The market resolves to any number of choices that made the model stronger.

For example, if the question were about how other models got smarter than their predecessors, the resolutions would be:
- Llama 3: data

- Claude 3: data, parameters (? judging from the fact that Opus is 10x more expensive than Claude 2), RL, multimodal (? the multimodal training may not have improved text ability), architecture (?)

- Gemini 1.5 Pro: multimodal, architecture (long context + MoE), data (?)
