Will GPT-4 have context window of at least 32,768 text tokens?
Resolved N/A (Mar 16)

This question resolves to YES if OpenAI's vanilla GPT-4 is capable of handling at least 32,768 text tokens inside of its context window (with no modifications). Otherwise, it resolves to NO.

In case there is no simple, uncontroversial way of measuring the size of the context window, then this question will resolve to N/A. If GPT-4 is not released before 2024, this question will resolve to N/A.

Close date updated to 2024-01-01 12:00 am


Sorry everyone. I'll try to provide better criteria next time.

@MatthewBarnett Noooo....

@R2D2 Wait, it looks like I didn't lose my mana? Only the profit is affected, right?

predicted NO

@R2D2 All trades are undone. You only end up losing M$0.1 per bet you placed to the platform fee.

Vanilla is capped at 8k; gpt-4-32k is a separate entry in the model window.

I don’t have any mana in this market, but it seems like a pretty clear N/A based on the comments - “In case there is no simple, uncontroversial way of measuring the size of the context window, then this question will resolve to N/A”. The way of measuring is controversial, so it will resolve N/A.

I see @MatthewBarnett is online. Please discuss your intended resolution of this market with us before performing it.

“OpenAI's vanilla GPT-4 is capable of handling”

Interesting. I can't like this comment because Gigacasting has me blocked, but I can respond to it? Weird.

Anyway, I agree that this line from the description makes it pretty clear this market should resolve NO.

@IsaacKing But... it is capable? It was announced in the blog post. Their developer marketing calls it "DV (8k max context)" and "DV (32k max context)" implying "same model, but we limited the context size to save on compute". Again, they probably trained on 32k and released 8k to save on compute because they knew everyone would hammer it.

predicted YES

@Mira Sorry, the developer marketing comment was for 3.5 being shipped earlier. But it means they have a procedure for reusing the same model with different context sizes. Which also makes sense that it should be possible, from my understanding of how GPTs work. I think it's likely 8k and 32k are "the same model".

Is OpenAI's vanilla GPT-4 capable of handling at least 32,768 text tokens inside of its context window (with no modifications)?

Doesn't seem like it to me. They explicitly describe the 32k version as a variant, and the 8k version as the "normal" version.

Even if they're the same underlying model, I think the description is asking about the software that people actually have access to.

predicted YES

@IsaacKing Sure - we can delay resolution until 32k gets out of waitlist and people have access to it. That would be even better, because you could test in the Playground and see if probabilities of token completion for both models are similar. If the probabilities are sufficiently correlated, I think it makes sense to say they're the "same GPT-4".
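The comparison proposed here could be run mechanically: collect each model's top next-token logprobs for the same prompts and check how correlated they are. A minimal sketch with made-up numbers (the logprob values below are hypothetical illustrations, not real API output):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def logprob_correlation(a: dict, b: dict) -> float:
    """Correlate two models' top-token logprobs over their shared tokens."""
    shared = sorted(set(a) & set(b))
    return pearson([a[t] for t in shared], [b[t] for t in shared])

# Hypothetical top-5 logprobs for one prompt from gpt-4 and gpt-4-32k.
lp_8k  = {" the": -0.3, " a": -1.9, " an": -3.1, " my": -4.2, " his": -4.5}
lp_32k = {" the": -0.4, " a": -1.7, " an": -3.0, " my": -4.4, " his": -4.6}
print(logprob_correlation(lp_8k, lp_32k))  # near 1.0 if they behave alike
```

Whether "sufficiently correlated" counts as "the same GPT-4" is of course exactly the judgment call the market description leaves open.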

gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768–context (about 50 pages of text) version, gpt-4-32k

https://openai.com/research/gpt-4

Sounds like this has to resolve N/A

@Gabrielle Seems pretty clear that the vanilla GPT-4 model has a context length of 8,192 tokens.

@IsaacKing They probably trained the model for 32k and limited the context to 8k to save on compute during the big release. So they're both the same model, but if anything the 8k context would not be "vanilla GPT-4".

In case there are versions with different context windows, how does this resolve?

predicted NO

O(N²) attention says NO.

256x more compute for the same parameter count (32,768 tokens is 16x GPT-3's 2,048-token window, and vanilla attention cost scales with the square of the length).

Longformer/BigBird/memory mechanisms involve modifications; more likely we'll see one of those plus a high parameter count.
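The 256x figure follows from the quadratic cost of vanilla self-attention. A quick sketch (the 2,048-token baseline is an assumption, matching GPT-3's context window):

```python
def attention_cost_ratio(new_len: int, old_len: int) -> float:
    """Relative per-layer cost of vanilla self-attention, which scales
    as O(N^2) in sequence length N, at the same parameter count."""
    return (new_len / old_len) ** 2

# Assumed baseline: GPT-3's 2,048-token window vs. a 32,768-token window.
print(attention_cost_ratio(32_768, 2_048))  # -> 256.0
```

This only counts the attention term; the feed-forward layers scale linearly in N, so the end-to-end cost multiplier would be smaller in practice.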

predicted YES

@Gigacasting It will 100% have 32k token length, there’s no doubt about it. Already DV in OpenAI Foundry has a 32k variant. This market can only resolve YES or N/A (if GPT-4 is not out by the end of the year).
