Open numeric question for the number of GPT-4 parameters. The market will resolve to 100B if there are fewer than 100B, and to 100T if there are more than 100T.
GPT-3 had 175 billion parameters. There have been rumors that GPT-4 will be much bigger.
Similar markets:
https://manifold.markets/MaxGhenis/will-gpt4-have-at-least-100-trillio (requires >100T, which is currently unlikely)
https://manifold.markets/JustinTorre/how-many-parameters-with-gpt4-have (ranges)
@DanMan314 Yeah, it definitely looks wrong. The people with the most profit all bought 'Lower' on high values. I'll unresolve it for now?
I unresolved it a second time and re-resolved it a third time. This time I am more confident in the result.
This is an 'old' Numeric market, which is just a skin over a regular binary market. There are two fields used to resolve it. One is `value`, but this is literally just a number used for display purposes. The other is `probabilityInt`. The API docs say to calculate it this way:

> `probabilityInt`: Required if `value` is present. Should be equal to:
> If log scale: `log10(value - min + 1) / log10(max - min + 1)`
So I tried using a figure like 0.87xxx, but apparently it wanted 87.xxx instead. Now it looks much better.
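For anyone else fighting this market type, here's a minimal sketch of that calculation in Python, assuming this market's bounds of 100B and 100T; the function name and the percentage conversion at the end are just my reading of the behaviour described above, not anything official:

```python
import math

# Market bounds from the question: resolves between 100B and 100T parameters.
MKT_MIN = 100e9    # 100 billion
MKT_MAX = 100e12   # 100 trillion

def probability_int(value: float, log_scale: bool = True) -> float:
    """How far `value` sits between MKT_MIN and MKT_MAX, per the quoted docs."""
    if log_scale:
        frac = math.log10(value - MKT_MIN + 1) / math.log10(MKT_MAX - MKT_MIN + 1)
    else:
        frac = (value - MKT_MIN) / (MKT_MAX - MKT_MIN)
    # The resolve call apparently wants the percentage form (87.xxx),
    # not the fraction (0.87xxx) -- that was the mistake described above.
    return frac * 100

print(probability_int(1.8e12))  # ~87.36 for a 1.8T resolution
```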
Notice how everyone who played got back all their money and then like 5 extra mana or 5 less mana? I guess that's why they deprecated this market type 😂
@mods It looks like this needs admin intervention; it's not possible to resolve numeric markets to large values.
@andrew I tried to resolve it with the API. It appears visually to have worked, but someone needs to see if it actually paid out the correct amounts. If there is an issue, we can unresolve it and wait for Ian to look at it.
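For reference, the resolution call I made looks roughly like the sketch below. The endpoint path, auth header, and `outcome` value are my reading of the Manifold API docs from memory and should be double-checked; only `value` and `probabilityInt` are confirmed by the docs quoted earlier.

```python
import requests

API_KEY = "..."    # from your Manifold profile settings
MARKET_ID = "..."  # the contract id of this market

# Endpoint path, auth header, and "outcome" below are assumptions; verify
# against the current API docs before actually resolving anything.
resp = requests.post(
    f"https://api.manifold.markets/v0/market/{MARKET_ID}/resolve",
    headers={"Authorization": f"Key {API_KEY}"},
    json={
        "outcome": "MKT",         # resolve to a probability rather than YES/NO
        "value": 1.8e12,          # display value (units assumed to be raw parameters)
        "probabilityInt": 87.36,  # percentage form, per the formula above
    },
)
resp.raise_for_status()
print(resp.json())
```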
@andrew It's not accidental. He clearly says:
> The latest, the state-of-the-art OpenAI model, is approximately 1.8 trillion parameters.
The name GPT-MoE-1.8T recurs multiple times throughout the presentation. Jensen knows what he's talking about; NVIDIA is the one providing the hardware for OpenAI.
Source
I think the only uncertainty here is exactly which model he is talking about, but I'd say it's pretty safe to resolve this.
The other nuance is how to deal with the fact that it's an MoE model. If each of the experts has 200 billion parameters, does this resolve to that?
@Shump Yeah, agreed that he knows what he's saying. The question is whether "the latest" means the one already released (GPT-4) or the next one. Unless consensus here disagrees strongly.
As for how to resolve, I lean pretty solidly towards counting all trained parameters. The result is most clearly not a 200B model; it's a stack of 8 of them, each needing to be trained and each being used at runtime. The fact that only a subset of experts gets activated for a specific token's evaluation doesn't really change that.
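To make the distinction concrete, here's a toy tally of trained vs. activated parameters, using the expert count and size mentioned above (8 experts of roughly 200B each). The number of experts routed per token is my assumption, and shared non-expert parameters are ignored for simplicity.

```python
# Toy tally: expert count and size come from the rumors discussed above,
# not confirmed figures; shared (non-expert) parameters are ignored.
n_experts = 8
params_per_expert = 200e9   # ~200B each, per the comment above
experts_per_token = 2       # typical MoE routing; an assumption on my part

total_trained = n_experts * params_per_expert              # what this market would count
active_per_token = experts_per_token * params_per_expert   # what one forward pass uses

print(f"trained: {total_trained / 1e12:.1f}T, "
      f"active per token: {active_per_token / 1e9:.0f}B")
# trained: 1.6T, active per token: 400B
```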
Rumors have been coming out; I'll wait for more confirmation. But it looks like the market adjusted.
https://twitter.com/soumithchintala/status/1671267150101721090
@jonsimon Well yes, the GPT-4 parameter count will probably not be released within any reasonable time frame, but it's pretty likely more than 200B lol
@ShadowyZephyr It depends strongly on whether they incorporated Chinchilla scaling laws. If they did, then 200B is quite plausible.
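Rough back-of-envelope for why Chinchilla-style scaling points toward a model in that range, using the usual C ≈ 6·N·D approximation and the ~20-tokens-per-parameter rule of thumb; the compute budget plugged in below is purely illustrative, not a leaked GPT-4 figure:

```python
# Back-of-envelope Chinchilla check: training compute C ~ 6 * N * D, and the
# compute-optimal point is roughly D ~ 20 * N tokens.
def chinchilla_optimal_params(compute_flops: float, tokens_per_param: float = 20.0) -> float:
    # C = 6 * N * D with D = tokens_per_param * N  =>  N = sqrt(C / (6 * tokens_per_param))
    return (compute_flops / (6 * tokens_per_param)) ** 0.5

# Roughly 10x GPT-3's ~3.14e23 training FLOPs as a stand-in budget:
print(f"{chinchilla_optimal_params(3e24) / 1e9:.0f}B parameters")  # ~158B
```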
Oh, he now says it was "a figure of speech" https://twitter.com/SebastienBubeck/status/1644151579723825154
So it's pretty weak evidence, but still (also, I can't delete the previous comment anyway).