$1T AI training cluster before 2031?
73% chance

In "Situational Awareness: The Decade Ahead", Leopold Aschenbrenner estimates that the largest training clusters will cost over one trillion dollars around 2030.

Clarifications:

  • $1T worth of computers and associated data center infrastructure (e.g. building, cooling, networking; does not include the cost of the power plant)

  • Computers must be networked together and training one model. They do not need to synchronize weights at each gradient step.

  • Value of data centers will be estimated with reasonable depreciation. So, $1T of purchase price in TPUv1s would not count.

  • Nominal dollars.

  • So, if the nominal value of the datacenter being used to train a model, less depreciation, ever crosses $1T, this market resolves YES (a toy sketch of this arithmetic is below).
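
For concreteness, here is a minimal sketch of the resolution arithmetic. The four-year straight-line depreciation schedule and the purchase figures are illustrative assumptions, not part of the market's rules:

```python
# Toy sketch of the "nominal value less depreciation" criterion.
# Straight-line depreciation over an assumed 4-year useful life.

def depreciated_value(purchase_price: float, age_years: float,
                      useful_life_years: float = 4.0) -> float:
    """Nominal value of hardware after straight-line depreciation."""
    remaining = max(0.0, 1.0 - age_years / useful_life_years)
    return purchase_price * remaining

# Hypothetical purchases: (price in $B, age in years at training time).
purchases = [(600, 0.5), (500, 1.5), (400, 3.0)]
total = sum(depreciated_value(p * 1e9, age) for p, age in purchases)

# Resolves YES only if this crosses $1T; here older hardware has
# depreciated enough that the cluster falls short (~$0.94T).
print(f"Depreciated cluster value: ${total / 1e12:.2f}T")
```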

This is one of a series of markets on claims made in Leopold Aschenbrenner's Situational Awareness report.

bought Ṁ50 NO

Considering how quickly GPUs are improving, I'm guessing the training-FLOPs-vs-intelligence curve would be basically flat well before you would reach $1 trillion in 2031.

Edited title to match language of other [LA:SA] markets. (No change to meaning.)

If Google has spent $1T on all of their TPUs and does distributed training, does that count?

Or does this require a single data center?

@wrhall I’ll count “$1T worth of computers, networked together, training one model”. A few details:

  • Value of data centers will be estimated with reasonable depreciation. So, $1T of purchase price in TPUv1s would not count.

  • $1T nominal dollars
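
For intuition, here is a minimal sketch (my illustration; not part of the market's resolution criteria) of what training one model without synchronizing weights at every gradient step can look like, using local SGD with periodic parameter averaging:

```python
# Sketch (assumptions mine): workers take independent local gradient steps
# and only periodically average parameters, so it is still "one model"
# without per-step weight synchronization.
import numpy as np

rng = np.random.default_rng(0)
n_workers, sync_every, steps, lr = 4, 8, 64, 0.1

# Each worker holds its own copy of the shared parameter vector.
params = [np.zeros(3) for _ in range(n_workers)]
target = np.array([1.0, -2.0, 0.5])  # toy objective: pull params toward target

for step in range(1, steps + 1):
    for w in range(n_workers):
        grad = params[w] - target + 0.01 * rng.standard_normal(3)
        params[w] -= lr * grad          # local update, no cross-worker sync
    if step % sync_every == 0:          # periodic averaging re-unifies the model
        avg = np.mean(params, axis=0)
        params = [avg.copy() for _ in range(n_workers)]

print("final (averaged) params:", np.mean(params, axis=0))
```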

opened a Ṁ200 YES at 30% order

@Tossup Seems fair! I don't know where they'll get the money (how big is their war chest?), but if you count two generations of TPUs it feels plausible. Not sure whether we'll be able to get accurate accounting from them in 2030, but let's see.
