Will growth in the maximum MW used to train AIs slow down by more than 2x after the GPT-5-like model?
  • The maximum MW (power) used to train SOTA AIs (e.g., Frontier Models)

    • Accounting for data center overheads.

    • E.g., the total power supply needed by all the data centers used to train the most power-hungry frontier model.

    • The AI must be a real model with SOTA capabilities, not a toy demonstration with a very short training time.

    • If needed, this will be approximated as (approximate number of GPUs) × (per-GPU power consumption) × 2.

    • We will use the average MW power supply over the full pre-training.

  • I will compare:

    • W_Growth_Pre_GPT_5 = 10 / (t(GPT-5) - t(GPT-4)) to

    • W_Growth_Post_GPT_5 = 10 / (t(GPT-6) - t(GPT-5))

  • I will resolve YES if W_Growth_Post_GPT_5 < W_Growth_Pre_GPT_5 / 2 (a worked sketch follows this list).

    • This is equivalent to (t(GPT-6) - t(GPT-5)) being more than twice (t(GPT-5) - t(GPT-4)).

    • t(X) stands for the time of peak training power of the AI X.

  • GPT-5 is a placeholder for the first AI system trained by OpenAI to use approximately 10 times as much power as GPT-4 (as initially trained).

  • GPT-6 is a placeholder for the first AI system trained by OpenAI to use approximately 10 times as much power as the GPT-5 placeholder and approximately 100 times as much as GPT-4.

  • Resolves as soon as information about the GPT-5 and GPT-6 placeholders is sufficiently robust.
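
As an illustration only, here is a minimal Python sketch of the approximation rule and the resolution check, assuming hypothetical dates and GPU figures. The helper names (`approx_training_power_mw`, `power_growth_rate`) and every number below are made up for the example and are not claims about actual models.

```python
from datetime import date

def approx_training_power_mw(num_gpus: int, gpu_power_kw: float,
                             overhead_factor: float = 2.0) -> float:
    """Approximate training power in MW: GPU count x per-GPU power x 2 (overhead)."""
    return num_gpus * gpu_power_kw * overhead_factor / 1000.0  # kW -> MW

def power_growth_rate(t_earlier: date, t_later: date) -> float:
    """Growth rate as 10 divided by the gap (in years) between peak-power training times."""
    years = (t_later - t_earlier).days / 365.25
    return 10.0 / years

# Example of the approximation rule: 100,000 GPUs at 1 kW each -> ~200 MW (hypothetical).
example_mw = approx_training_power_mw(num_gpus=100_000, gpu_power_kw=1.0)

# Illustrative peak-power training dates (hypothetical, not actual model dates).
t_gpt4 = date(2022, 8, 1)
t_gpt5 = date(2024, 8, 1)   # placeholder: ~10x GPT-4 training power
t_gpt6 = date(2029, 8, 1)   # placeholder: ~10x GPT-5, ~100x GPT-4

w_growth_pre_gpt5 = power_growth_rate(t_gpt4, t_gpt5)
w_growth_post_gpt5 = power_growth_rate(t_gpt5, t_gpt6)

# Resolution rule: YES if post-GPT-5 growth is less than half the pre-GPT-5 growth,
# i.e. the GPT-5 -> GPT-6 gap is more than twice the GPT-4 -> GPT-5 gap.
resolves_yes = w_growth_post_gpt5 < w_growth_pre_gpt5 / 2
print(f"example_mw={example_mw:.0f} MW, pre={w_growth_pre_gpt5:.2f}, "
      f"post={w_growth_post_gpt5:.2f}, YES={resolves_yes}")
```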
