At least one of the most powerful neural nets at end of 2030 will be trained using 10^27 FLOPs
90% chance

Resolves YES if at least one of the most powerful neural nets publicly known to exist by the end of 2030 was trained using at least 10^27 FLOPs. This is roughly 30 exaFLOP/s sustained for a year. It does not matter whether the compute was distributed, as long as one of the top models used it. A neural net trained with 10^27 FLOPs that is nonetheless inferior to other models does not count. Training in any floating-point precision, including fp32, fp16, or fp8, is permitted.
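
For intuition, here is a minimal Python sketch of the unit conversion behind the "~30 exaFLOP/s-years" figure (a back-of-the-envelope check, not part of the resolution criteria):

```python
# Convert the 10^27 FLOPs threshold into the sustained throughput
# needed to spend that compute over exactly one year.

TOTAL_FLOPS = 1e27                       # training compute threshold (FLOPs)
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.156e7 seconds
EXA = 1e18                               # the "exa-" SI prefix

# Throughput required to burn 1e27 FLOPs in one year:
flops_per_second = TOTAL_FLOPS / SECONDS_PER_YEAR  # ~3.17e19 FLOP/s
exaflops_sustained = flops_per_second / EXA        # ~31.7 exaFLOP/s

print(f"{exaflops_sustained:.1f} exaFLOP/s sustained for one year")
# -> 31.7 exaFLOP/s, i.e. ~30 exaFLOP/s-years as stated above
```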

Resolves NO if no such model exists by end of 2030.

Resolves N/A if there are no good estimates of the training compute used by top models.
