Will GPT-5 have over 1 trillion parameters?
87% chance

This market resolves YES if the initial GPT-5 model, when released by OpenAI, was trained with over 1 trillion parameters; otherwise, it resolves NO.

If OpenAI ceases to exist, publicly confirms there will not be a GPT-5, or has not released GPT-5 by 2030, this market resolves NA. If OpenAI is acquired or merges, "OpenAI" refers to the successor company.

"Initial GPT-5 model" refers to the first release of GPT-5 by OpenAI, not including subsequent updates, variants, or distillations:

  1. Distillations: If GPT-5 has variants distilled down to fewer parameters, those do not affect the market's resolution.

  2. Subsequent variants: If new variants in the GPT-5 series are trained with more parameters after the initial release, these do not count toward the resolution.

  3. Naming: GPT-5 must be publicly recognized as such by OpenAI. If it is called "GPT-5" internally, but has a public name like GPT-4.5, it doesn't count.

  4. Staggered releases: If OpenAI trains multiple GPT-5 variants but staggers their releases, any of them counts as the initial model, provided they were announced together.

Confirmation of the number of parameters in the initial GPT-5 model must come from an official source, such as:

  1. OpenAI or Microsoft

  2. Current or former executives of OpenAI or Microsoft

  3. Current or former employees of OpenAI or Microsoft directly involved in the development of GPT-5

A journalist can serve as an acceptable proxy for an official source, provided that:

  1. The information clearly originates from an official source.

  2. The information is not based on rumors or unofficial statements.

@Mira How do you count the number of params in a MoE?

@FergusArgyll Technically this market resolves on reporting, so I don't. Some journalist or OpenAI employee decides how they count parameters.

But I would expect them to be using some form of gradient descent, which modifies a bunch of floating-point numbers. And "the quantity of numbers updated by an optimizer, that vary during training, and that aren't erased for inference but are counted before distillation" is what I would expect them to report. So Adam optimizer state like momentum shouldn't count as parameters, even though it varies during training. The embedding table counts too, but the number of tokens wouldn't, unless that's also a dynamic parameter updated during training.
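
For concreteness, here's a minimal sketch of that counting rule, assuming a PyTorch model; the toy model and the helper name are just my illustration, not anything OpenAI has stated:

```python
# Minimal sketch of the "numbers updated by the optimizer" rule,
# assuming a PyTorch model. Everything here is illustrative.
import torch.nn as nn


def count_reported_parameters(model: nn.Module) -> int:
    """Count the numbers an optimizer would update during training.

    Embedding tables are ordinary parameters, so they count.
    Adam's momentum/variance buffers live in the optimizer's state,
    not in model.parameters(), so they are excluded automatically.
    """
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


# Toy example: a 50k-token embedding table plus an output projection.
model = nn.Sequential(nn.Embedding(50_000, 512), nn.Linear(512, 50_000))
print(count_reported_parameters(model))  # 25,600,000 + 25,650,000 = 51,250,000
```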

If they do something like "split floating-point numbers into 16/32/64-wide binary and report bits independently", I might have to translate that to parameters. But unless the translation is trivial, I would probably count each bit separately (if they don't use the word "parameters" in any journalistic reporting or any OpenAI articles, which take precedence over my own interpretation).
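
And on the MoE question specifically: under that rule, every expert's weights would count, since the optimizer updates all of them during training even though only a few run per token at inference. A hypothetical illustration (my own, not anything OpenAI has confirmed):

```python
# Hypothetical toy MoE layer: total vs. active parameter counts.
import torch.nn as nn


class ToyMoELayer(nn.Module):
    """A router plus n_experts feed-forward experts."""

    def __init__(self, d_model: int = 512, n_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_experts)
        )


layer = ToyMoELayer()
# The optimizer updates every expert, so all of them count:
total = sum(p.numel() for p in layer.parameters())
# With top-1 routing, only the router plus one expert runs per token:
active = sum(p.numel() for p in layer.router.parameters()) + sum(
    p.numel() for p in layer.experts[0].parameters()
)
print(total, active)  # total is roughly n_experts times the active count
```

So by this reading, an MoE's headline number would be the total across all experts, not the per-token active count.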

bought Ṁ89 of YES

I hope you're not really going to wait until 2031 to close this

bought Ṁ0 of YES

@R2D2 If the number of parameters is confirmed earlier, I'll resolve early. If no information comes out until 2030, I'll leave it open until then.

The longer timescale is because, with GPT-4, OpenAI announced they aren't sharing architectural details, so it may take longer for that information to get out.

Loans make every market more than a month out roughly equivalent in time value anyway, especially since NA would be the resolution if no information comes out.
