Will GPT-4's parameter count be publicly announced by the end of March?
resolved Apr 1

This market resolves YES if credible reporting on GPT-4's parameter count is available by 11:59 PM EST on March 31, 2023. Credible reporting must be corroborated by OpenAI, and the reported parameter count must have at least 2 significant digits.
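To illustrate how the significant-digit criterion could be applied to a reported figure, here is a minimal sketch. The function and its conservative treatment of trailing zeros are my own assumptions for illustration, not part of the market rules:

```python
def significant_digits(figure: str) -> int:
    """Count significant digits in a numeric string like '1.76' or '175'.

    Leading zeros don't count; trailing zeros in a bare integer are read
    conservatively as placeholders (so '100' counts as 1 digit).
    """
    digits = figure.replace(".", "").lstrip("0")
    if "." not in figure:  # no decimal point: strip placeholder zeros
        digits = digits.rstrip("0")
    return len(digits)

# "1 trillion" reports only '1': 1 significant digit, so it would not qualify
# "1.76 trillion" reports '1.76': 3 significant digits, so it would qualify
```

Under this reading, a round claim like "1 trillion parameters" falls short of the 2-digit threshold, while "175 billion" or "1.8 trillion" would meet it.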

This market will resolve YES immediately after the above conditions are met. If they are not met at any point by the expiration date, this market resolves NO.

predicted NO

Resolves NO

predicted YES

If this market resolves NO, I will make another market with a later expiration date (tentatively May 31st). Any feedback on the rules I used here? In particular, I would like to hear what people think about the "2 significant digits" constraint. I'm wondering if I should reduce this to 1 or maybe go with another formulation.

predicted YES

@nmehndir Mana rewards (Ṁ25-100) for particularly substantive feedback.

bought Ṁ100 of NO

Oh how craftily and blatantly saltman navigates around the parameter question (46:28)

@firstuserhere That tells me the answer is "a small enough number that it wouldn't sound impressive enough to be good PR"

predicted NO

@jonsimon or maybe so small that it would be impressive, but they don't want to give away any tricks they've come up with for making small models more capable

@ErickBall Doubtful. Likely it's a midrange Chinchilla-scaled model trained on highly-curated data

@jonsimon But people are dumb, so all they would hear is "OpenAI's model isn't the biggest" and therefore assume it must not be the bestest

bought Ṁ330 of YES

"The latest language model, GPT-4, has 1 trillion parameters."

The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor

We have the credible reporting part: "Semafor spoke to eight people familiar with the inside story, and is revealing the details here for the first time."

bought Ṁ50 of NO

@Mira That's 1 significant digit, so even if it were corroborated it wouldn't count.

bought Ṁ100 of NO

@Imuli And also, I don't think it was corroborated by OpenAI.

predicted YES

Interesting that people seem to think the parameter count won't be made public for a while, yet there's still a ton of daily volume in the parameter markets.

bought Ṁ82 of NO

They explicitly said in the paper, "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar." So if I understand the market correctly, this would require them to go back on that statement. Model size seems basically synonymous with parameter count here. https://cdn.openai.com/papers/gpt-4.pdf