
Will any LLM have a context window of at least 1 million characters by the end of 2028?
Resolved YES · May 11 · Ṁ2670
I'm using characters instead of tokens because token size varies between tokenizers, and characters are what humans actually care about. If a model advertises its context window in tokens, I'll convert it to characters at that tokenizer's average rate on representative human text.
Something "cheaty" doesn't count; the model has to be, say, at least as smart as GPT-3 on similar inputs.
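The conversion rule above can be sketched in code. The whitespace "tokenizer" and the short sample string here are stand-ins of my own; an actual resolution check would use the model's real tokenizer (e.g. via the tiktoken library) on a large, representative corpus.

```python
def tokens_to_chars(context_tokens, sample_text, tokenize):
    """Convert an advertised token context window to characters,
    using the tokenizer's average characters-per-token on sample text."""
    n_tokens = len(tokenize(sample_text))
    chars_per_token = len(sample_text) / n_tokens
    return int(context_tokens * chars_per_token)

# Toy example: whitespace splitting stands in for a real tokenizer.
sample = "The quick brown fox jumps over the lazy dog."
print(tokens_to_chars(1_000_000, sample, str.split))  # → 4888888
```

So under this toy tokenizer (about 4.9 characters per token), a 1M-token window would comfortably clear the 1M-character bar; a real subword tokenizer averages closer to 4 characters per token on English text, which would still clear it.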
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ149
2 | | Ṁ59
3 | | Ṁ24
4 | | Ṁ24
5 | | Ṁ8
Related questions
Will an LLM consistently create 5x5 word squares by 2026?
84% chance
Will the best LLM in 2027 have <1 trillion parameters?
26% chance
Will there be major breakthrough in LLM Continual Learning before 2026?
25% chance
Will the best LLM in 2025 have <500 billion parameters?
23% chance
Will the best LLM in 2026 have <1 trillion parameters?
40% chance
Will the best LLM in 2025 have <1 trillion parameters?
42% chance
Will LLMs become a ubiquitous part of everyday life by June 2026?
82% chance
Will an LLM do a task that the user hadn't requested in a notable way before 2026?
92% chance
Will the best LLM in 2027 have <250 billion parameters?
12% chance
Will the best LLM in 2027 have <500 billion parameters?
13% chance