Will growth in the maximum MW used to train AIs slow down by more than 2x after GPT-6-like models?
  • The maximum MW (power) used to train SOTA AIs (e.g., Frontier Models)

    • Accounting for data center overheads.

    • E.g., the total power supply needed by all the data centers used to train the most power-hungry frontier model.

    • The AI must be a real model with SOTA capabilities, not a toy demonstration with a very short training time.

    • If needed, this will be approximated as (approximate number of GPUs) × (per-GPU power consumption) × 2.

    • We will use the average MW power supply over the full pre-training.

  • I will compare:

    • W_Growth_Pre_GPT_6 = 10 / (t(GPT-6) - t(GPT-5)) to

    • W_Growth_Post_GPT_6 = 10 / (t(GPT-7) - t(GPT-6))

  • I will resolve YES if W_Growth_Post_GPT_6 < W_Growth_Pre_GPT_6 / 2 (see the sketch after this list).

    • This is equivalent to (t(GPT-7) - t(GPT-6)) being more than twice (t(GPT-6) - t(GPT-5)).

    • t(X) stands for the time of peak training power of the AI X.

  • GPT-5 is a placeholder for the first AI system trained to use approximately 10 times as much power as OpenAI's GPT-4 (as initially trained).

  • GPT-6 is a placeholder for the first AI system trained to use approximately 10 times as much power as the GPT-5 placeholder and approximately 100 times as much as OpenAI's GPT-4.

  • GPT-7 is a placeholder for the first AI system trained to use approximately 10 times as much power as the GPT-6 placeholder, approximately 100 times as much as the GPT-5 placeholder, and approximately 1,000 times as much as GPT-4.

  • Resolves as soon as information about the GPT-5, GPT-6, and GPT-7 placeholders is sufficiently robust.
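For concreteness, here is a minimal Python sketch of the criteria above, assuming peak-training dates are known as calendar dates and per-GPU power is expressed in kW. The function names, GPU count, per-GPU wattage, and dates are hypothetical illustrations only, not part of the resolution criteria.

```python
from datetime import date


def estimate_training_mw(num_gpus: int, gpu_power_kw: float, overhead_factor: float = 2.0) -> float:
    """Approximate data-center power (MW) for a training run:
    (approximate number of GPUs) x (per-GPU power) x overhead factor (the x2 above)."""
    return num_gpus * gpu_power_kw * overhead_factor / 1000.0  # kW -> MW


def resolves_yes(t_gpt5: date, t_gpt6: date, t_gpt7: date) -> bool:
    """YES iff the post-GPT-6 power-growth rate is less than half the pre-GPT-6 rate,
    i.e. (t(GPT-7) - t(GPT-6)) is more than twice (t(GPT-6) - t(GPT-5))."""
    pre_gap_days = (t_gpt6 - t_gpt5).days
    post_gap_days = (t_gpt7 - t_gpt6).days
    w_growth_pre = 10 / pre_gap_days    # ~10x power growth spread over the pre-GPT-6 interval
    w_growth_post = 10 / post_gap_days  # ~10x power growth spread over the post-GPT-6 interval
    return w_growth_post < w_growth_pre / 2


# Hypothetical numbers purely for illustration:
print(estimate_training_mw(num_gpus=100_000, gpu_power_kw=0.7))                    # ~140 MW
print(resolves_yes(date(2025, 1, 1), date(2027, 1, 1), date(2032, 1, 1)))          # True (gap more than doubled)
```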
