What will be true of OpenAI’s open-weight model?
20 traders · Ṁ1,717 · closes Dec 31
Is a reasoning model: 84%
Is at least an Image-Language Model (à la JanusPro or 4o): 38%
Is at least a Vision-Language Model (à la PaliGemma): 60%
Uses Beyer Teacher/Student distillation à la Gemma 3: 73%
My friend @soaffine (who knows about AI) thinks it unveils an impressive new architecture: 30%
An instruct model is made accessible: 70%
A base model is made accessible: 14%
The model is accessible to all users on the main ChatGPT site: 31%
Gets 1400+ Elo on lmarena.ai: 26%
At least 300B parameters: 21%
At least 70B parameters: 35%
At least 30B parameters: 79%
Better than o3-mini on FrontierMath (with tools): 26%
Uses Mixture of Experts: 30%
MIT or Apache license: 53%
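The three parameter-count answers are nested events: any model with at least 300B parameters also has at least 70B and at least 30B, so a coherent book must have non-increasing probabilities as the threshold grows. A minimal sketch checking that constraint and the implied mass for each disjoint size bucket, using the probabilities listed above:

```python
# Listed probabilities for the nested parameter-count answers.
thresholds = {"30B": 0.79, "70B": 0.35, "300B": 0.21}

# "At least 300B" implies "at least 70B" implies "at least 30B",
# so probabilities must be non-increasing in the threshold.
probs = [thresholds["30B"], thresholds["70B"], thresholds["300B"]]
coherent = all(a >= b for a, b in zip(probs, probs[1:]))
print("coherent:", coherent)  # the listed numbers are monotone, so True

# Probability mass the listed numbers imply for each disjoint bucket:
buckets = {
    "<30B": round(1.0 - thresholds["30B"], 2),
    "30B-70B": round(thresholds["30B"] - thresholds["70B"], 2),
    "70B-300B": round(thresholds["70B"] - thresholds["300B"], 2),
    ">=300B": thresholds["300B"],
}
print(buckets)  # {'<30B': 0.21, '30B-70B': 0.44, '70B-300B': 0.14, '>=300B': 0.21}
```

Since the buckets partition the outcome space, their masses sum to 1; traders can use violations of this kind of constraint to spot arbitrage across the threshold answers.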

Context:

Feel free to add your own answers. If an answer is unclear, you are welcome to ask for clarification from both me and the person who submitted it.


"reasoning model" will be judged with reference to OpenAI documentation.

bought Ṁ10 YES

What’s the difference between image/language and vision/language?

bought Ṁ10 NO

@KJW_01294 I think image/language generates images whereas vision/language only takes them as input

I've chosen to bet on the market that references me, but I assert that I will be truthful in my evaluation of it. To help others: I would have said no to Gemma 2/3, yes to PaliGemma, yes to DeepSeek-MoE, but no to DeepSeek V3 and DeepSeek R1.

bought Ṁ10 YES

"parameters" means total parameters?

bought Ṁ30 NO

@JoshYou correct
