@teortaxesTex's DeepSeek V4 predictions thread
- 40%: >=1.5T parameters
- 59%: >=52B active parameters
- 61%: >=25T pretraining tokens
- 46%: uses some non-AdamW optimizer
- 35%: DS-MoE with adaptive expert count
- 41%: intra-expert communication
- 51%: >=512 experts
- 52%: >=16 active experts
- 59%: >=2 shared experts
- 66%: some variation of NSA (Native Sparse Attention)
- 41%: 1M+ context
- 26%: Gemini 2.5 Pro tier or higher on FictionBench (90.6%+ at 192k)
- 15%: >=44% on Humanity's Last Exam (text only) on the scale.com leaderboard
- 16%: >=73% on SWE-Bench Verified (according to epoch.ai)
- 20%: >=60% on BrowseComp (https://www.kaggle.com/benchmarks/openai/browsecomp)
- 26%: >=50% on TerminalBench (https://www.tbench.ai/leaderboard)
- 39%: some image input (multimodality)
- 69%: releases before November
- 20%: DeepSeek reports results with a full-blown deep research agent and emphasizes that this is the intended use mode
Teortaxes gave some point estimates, which are not as amenable to prediction-market forecasting, so I turned them into over/under forecasts. I may add forecasts from other commenters in the thread later on, so not all options here are necessarily Teo's.
See the post for more (including forecasts I wasn't able to turn into market options).
This question is managed and resolved by Manifold.
Related questions
- When will DeepSeek release V4?
- Will DeepSeek's next reasoning model be called R3? (1% chance)
- will deepseek-v4 destroy all other models? (14% chance)
- When will Deepseek V4 be released? (11/6/25)
- When will DeepSeek release R2?
- Will DeepSeek's next reasoning model be open-sourced? (81% chance)
- DeepSeek open-source frontier model after 3/23/26? (59% chance)
- When will Deepseek R2 be released? (3/26/26)
- Will DeepSeek R2 be open source? (77% chance)
- Did DeepSeek receive unannounced assistance from OpenAI in the creation of their v3 model? (8% chance)