Will GPT-4's parameter count be publicly announced by the end of March?
4% chance

This market resolves YES if credible reporting on GPT-4's parameter count is available by 11:59 PM EST on March 31, 2023. Credible reporting must be corroborated by OpenAI, and the reported parameter count must have at least 2 significant digits.

This market will resolve YES immediately after the above conditions are met. If they are not met at any point by the expiration date, this market resolves NO.

nmehndir is predicting YES at 5%

If this market resolves NO, I will make another market with a later expiration date (tentatively May 31st). Any feedback on the rules I used here? In particular, I would like to hear what people think about the "2 significant digits" constraint. I'm wondering if I should reduce this to 1 or maybe go with another formulation.

nmehndir is predicting YES at 5%

@nmehndir Mana rewards (Ṁ25-100) for particularly substantive feedback.

firstuserhere bought Ṁ100 of NO

Oh how craftily and blatantly saltman navigates around the parameter question (46:28)

Jon Simon

@firstuserhere That tells me the answer is "a small enough number that it wouldn't sound impressive enough to be good PR"

Erick Ball is predicting NO at 10%

@jonsimon or maybe so small that it would be impressive, but they don't want to give away any tricks they've come up with for making small models more capable

Jon Simon

@ErickBall Doubtful. Likely it's a midrange Chinchilla-scaled model trained on highly-curated data

Jon Simon

@jonsimon But people are dumb, so all they would hear is "OpenAI's model isn't the biggest" and therefore assume it must not be the bestest

Mira bought Ṁ330 of YES

"The latest language model, GPT-4, has 1 trillion parameters."

The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor

We have the credible reporting part: "Semafor spoke to eight people familiar with the inside story, and is revealing the details here for the first time."

Imuli bought Ṁ50 of NO

@Mira That's 1 significant digit, so even if it were corroborated it wouldn't count.
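The "2 significant digits" criterion the market hinges on can be checked mechanically. A minimal sketch of that check (the helper name and the convention that trailing zeros in a bare integer are not significant are assumptions for illustration, not part of the market rules):

```python
def significant_digits(figure: str) -> int:
    """Count significant digits in a reported figure like '1 trillion'.

    Convention assumed here: leading zeros never count, and trailing
    zeros in a bare integer (a round reported number) don't count either.
    """
    num = figure.split()[0]                      # e.g. '1.76' from '1.76 trillion'
    digits = num.replace('.', '').lstrip('0')    # drop decimal point and leading zeros
    if '.' not in num:
        digits = digits.rstrip('0') or '0'       # round integers: trailing zeros not significant
    return len(digits)

print(significant_digits("1 trillion"))    # 1 digit: would not satisfy the market
print(significant_digits("1.8 trillion")) # 2 digits: would satisfy it
print(significant_digits("175 billion"))  # 3 digits: GPT-3's reported count
```

Under this convention, Semafor's "1 trillion" figure has a single significant digit, so it falls short of the threshold regardless of corroboration.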

Valery Cherepanov bought Ṁ100 of NO

@Imuli And also, I don't think it was corroborated by OpenAI.

nmehndir is predicting YES at 13%

Interesting that people seem to think the parameter count won't be made public for a while, yet there's still a ton of daily volume in the parameter markets

Jacy Reese Anthis bought Ṁ82 of NO

They explicitly said in the paper, "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar." So if I understand the market correctly, this would require them to go back on that statement. Model size seems basically synonymous with parameter count here. https://cdn.openai.com/papers/gpt-4.pdf