
As of 2023, China's artificial intelligence (AI) sector has achieved outsized success, with Chinese companies occupying the top five spots in a U.S. government ranking of the most accurate facial recognition technology producers. This success has been attributed to an alignment of interests between AI technology and autocratic rulers: AI is a fundamentally predictive technology, and autocratic regimes are known for collecting vast amounts of data. The compute (measured in FLOP) used for machine learning training runs has also grown substantially over time, driven by increasing access to cloud computing resources and declining costs.
Before January 1, 2027, will any media report a machine learning training run in China exceeding 10^26 FLOP?
Resolution Criteria:
This question will resolve positively if, before January 1, 2027, there is credible media reporting of a machine learning training run in China that exceeds 10^26 FLOP. The reporting must specify that the run occurred in China and must provide a clear and valid calculation of the FLOP count. A FLOP (floating-point operation) is a measure of computational work, used here to estimate the compute resources consumed in training a machine learning model. Total training FLOP can be estimated by multiplying the FLOPS (floating-point operations per second) sustained by each computer involved by the number of seconds that computer spent training.
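The estimation method described above can be sketched in a few lines. All figures below (cluster size, per-GPU peak throughput, utilization, duration) are illustrative assumptions, not numbers from any report:

```python
def training_flop(num_gpus, peak_flops_per_gpu, utilization, seconds):
    """Total FLOP ~= GPUs x peak FLOPS per GPU x utilization x wall-clock seconds."""
    return num_gpus * peak_flops_per_gpu * utilization * seconds

# Hypothetical example: a 10,000-GPU cluster, ~1 PFLOPS peak per GPU,
# 40% utilization, running for 90 days. Every number is an assumption.
total = training_flop(
    num_gpus=10_000,
    peak_flops_per_gpu=1e15,   # ~1 PFLOPS peak per accelerator (assumption)
    utilization=0.4,           # sustained fraction of peak (assumption)
    seconds=90 * 24 * 3600,    # 90 days of wall-clock training
)
print(f"{total:.2e} FLOP")  # 3.11e+25 FLOP -- still below the 1e26 threshold
```

Note that even this fairly large hypothetical cluster falls short of 10^26 FLOP, which gives a feel for the scale the question asks about.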
The source of the report must be credible and reputable, such as a recognized media outlet, a government report, or an academic paper. The report must be publicly available and verifiable.
This question will resolve negatively if no such report is made available by January 1, 2027. If a report is released but is later retracted or debunked by a credible source, the question will also resolve negatively.
In the event of conflicting reports, the resolution will be based on the consensus of credible sources. If no consensus is achieved, the question will resolve as ambiguous.
Taking NO with the market at 86%. Three structural headwinds:
1) The largest known Chinese domestic training run (DeepSeek V3) used ~3×10²⁴ FLOP, roughly 33x below the 10²⁶ threshold. That is a massive gap to close in 9 months.
2) The Chinese labs most likely to reach this scale (Alibaba, ByteDance, DeepSeek) are increasingly training offshore on Nvidia GPU clusters in Singapore and Malaysia. The market requires the run to have occurred IN China — offshore runs would not count.
3) Even if a domestic run on Huawei Ascend chips reaches the threshold, the reporting requirement is non-trivial. Chinese training runs are often opaque, and credible media coverage with FLOP-level detail is not guaranteed. Epoch AI notes that Huawei's CANN framework has never demonstrated the utilization rates needed at this scale.
My estimate: ~68% for YES. The domestic compute gap, the offshore training trend, and reporting friction together bring this below the 86% market consensus.