This question resolves to YES if OpenAI's vanilla GPT-4 is capable of handling at least 32,768 text tokens inside of its context window (with no modifications). Otherwise, it resolves to NO.
In case there is no simple, uncontroversial way of measuring the size of the context window, then this question will resolve to N/A. If GPT-4 is not released before 2024, this question will resolve to N/A.
Close date updated to 2024-01-01 12:00 am
I see @MatthewBarnett is online. Please discuss your intended resolution of this market with us before performing it.
@IsaacKing But... it is capable? It was announced in the blog post. Their developer marketing calls it "DV (8k max context)" and "DV (32k max context)" implying "same model, but we limited the context size to save on compute". Again, they probably trained on 32k and released 8k to save on compute because they knew everyone would hammer it.
@Mira Sorry, the developer marketing comment was for 3.5 being shipped earlier. But it means they have a procedure for reusing the same model with different context sizes. Which also makes sense that it should be possible, from my understanding of how GPTs work. I think it's likely 8k and 32k are "the same model".
Doesn't seem like it to me. They explicitly describe the 32k version as a variant, and the 8k version as the "normal" version.
Even if they're the same underlying model, I think the description is asking about the software that people actually have access to.
@IsaacKing Sure - we can delay resolution until the 32k model is off the waitlist and people actually have access to it. That would be even better, because you could then test in the Playground and compare the token-completion probabilities of the two models. If the probabilities are sufficiently correlated, I think it makes sense to say they're the "same GPT-4".
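The test proposed above could be sketched roughly as follows. This is a hypothetical illustration, not an API call: the per-token log-probabilities would in practice come from the API's logprobs output for the same prompt under each variant; the numbers below are stand-ins, and only the correlation step is shown.

```python
# Sketch of the "are 8k and 32k the same model?" comparison: correlate
# per-token log-probabilities from the two variants on identical prompts.
# The logprob values here are made-up stand-ins (assumption); in practice
# they would be collected from the API for each model.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Stand-in logprobs for the same six candidate tokens under each variant.
lp_8k  = [-0.12, -2.30, -4.10, -0.95, -3.40, -1.80]
lp_32k = [-0.10, -2.45, -3.90, -1.05, -3.60, -1.70]

r = pearson(lp_8k, lp_32k)
print(f"correlation: {r:.3f}")
```

A correlation close to 1.0 across many prompts would support the "same model, different context limit" reading; a low correlation would suggest genuinely different weights.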
"gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768–context (about 50 pages of text) version, gpt-4-32k"
https://openai.com/research/gpt-4
Sounds like this has to resolve N/A
@IsaacKing They probably trained the model for 32k and limited the context to 8k to save on compute during the big release. So they're both the same model, but if anything the 8k context would not be "vanilla GPT-4".
@Gigacasting It will 100% have 32k token length, there’s no doubt about it. Already DV in OpenAI Foundry has a 32k variant. This market can only resolve YES or N/A (if GPT-4 is not out by the end of the year).