
When OpenAI announces GPT-5, this market resolves to the largest context size, measured in tokens, that they announce support for.
GPT-3: 2048 tokens
GPT-3.5: 4096
GPT-4: 8k, 32k
GPT-5: ???
Anthropic's Claude has announced a 100k variant, there are rumors of upcoming models with 1 million tokens of context, and surely OpenAI will want the most impressive-sounding model at release.
In the unexpected case that they don't announce a specific context size, or their architecture changes such that fixed context sizes no longer make sense, I'll wait until I have access and test its recall on very large documents.
If the largest context size isn't on this table, then this market resolves to a weighting of the two surrounding entries, interpolating linearly in log2 space. In the answer labels, k is a multiplier of 1024, so GPT-4 would resolve "32k". Claude's 100k variant (100,000 tokens) would resolve "log2(100,000) ≈ 16.61, so 2^16 = 64k would get weight 39% and 2^17 = 128k would get weight 61%".
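As a rough sketch of how I'd compute that weighting (the bucket labels and rounding here are just illustrative, not part of the resolution criteria):

```python
import math

def resolution_weights(tokens: int) -> dict:
    """Split resolution weight between the two surrounding power-of-two
    buckets by linear interpolation in log2 space."""
    exponent = math.log2(tokens)
    lower = math.floor(exponent)
    frac = exponent - lower  # how far toward the upper bucket we are
    return {
        f"{2 ** lower // 1024}k": round((1 - frac) * 100),        # lower bucket weight (%)
        f"{2 ** (lower + 1) // 1024}k": round(frac * 100),        # upper bucket weight (%)
    }

print(resolution_weights(100_000))  # {'64k': 39, '128k': 61}
```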