Largest training run compute greater than 2e27 FLOP by EOY 2026?
Ṁ2363 · closes Dec 31 · 92% chance

See https://epoch.ai/trends#compute. At market creation, the most recent update put Grok 3 at 5e26 FLOP.

This market will resolve to Epoch's estimate if it is available at the time of market close. If it is unavailable, the most sensible estimate available at that time will be used.

  • Update 2025-12-29 (PST) (AI summary of creator comment): Distributed training runs do not count toward this market's 2e27 FLOP threshold; only centralized training runs count.
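For scale, here is a minimal extrapolation sketch of the gap between the Grok 3 figure and the threshold. The ~4.5x/year frontier growth rate is an assumption loosely based on Epoch's published trend, not a resolution criterion:

```python
# Sketch: growth needed from the Grok 3 estimate (5e26 FLOP) to the 2e27 FLOP
# threshold, under an assumed frontier compute growth rate.
import math

grok3_flop = 5e26              # Epoch's estimate at market creation
threshold_flop = 2e27          # market threshold
assumed_growth_per_year = 4.5  # assumed frontier growth rate (x per year), not Epoch's exact figure

required_multiple = threshold_flop / grok3_flop  # = 4.0x
years_needed = math.log(required_multiple) / math.log(assumed_growth_per_year)
print(f"Need a {required_multiple:.1f}x jump; ~{years_needed:.2f} years at {assumed_growth_per_year}x/year")
```

Under that assumed rate, a 4x jump takes roughly a year, which is consistent with the market's high probability.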


@Dulaman that is not relevant to this market

@Bayesian I disagree. That curve can do all sorts of interesting things in 2026

@Dulaman this is completely implausible. They are at least 1000x behind, and that doesn't even account for the interconnect disadvantage. It's completely incomparable.

The default trend is scenario B, but if we're in scenario A, then distributed training could catch up to the "top 1 centralised" run before EOY 2026.
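A rough catch-up sketch of the disagreement, where both the 1000x gap and the ~1.5-year horizon are figures taken from the thread, not measured data:

```python
# Sketch: relative growth rate distributed training would need in order to close
# an assumed 1000x gap to the largest centralized run by EOY 2026 (~1.5 years out).
gap = 1000.0           # assumed gap between distributed and centralized runs
years_remaining = 1.5  # assumed time until EOY 2026

# Per-year growth of distributed relative to centralized needed to close the gap in time.
required_relative_growth = gap ** (1.0 / years_remaining)
print(f"Distributed runs would need to grow ~{required_relative_growth:.0f}x faster per year "
      f"than centralized runs to close a {gap:.0f}x gap in {years_remaining} years")
```

Under those assumptions the answer is about 100x faster per year, which is roughly what "scenario A" would have to look like.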

@Bayesian you're thinking about it wrong

@Dulaman no you are, this is pure slop

@Bayesian Ok then, what do you think this curve will look like over the next 12, 24 months?

@Dulaman not at 2e27 FLOP

I suspect we're going to get a 10^26 FLOP distributed run by 2027. 10^27 seems less likely, but I could see interesting things happening with the open source models in China, if they start doing massive distributed training runs between companies

like if the technology is there to handle a run that large, and they're all releasing open source models anyway, why wouldn't they pool their resources together?

the "interconnect disadvantage" isn't an argument against large FLOP runs, if anything the opposite is true. You want to burn more compute to overcome the inefficiencies caused by diloco etc

@Dulaman idk, this doesn't seem relevant to this market, but I don't wanna argue further

@Bayesian fine by me. Sorry if I'm misunderstanding some emotion or subtext in your reaction
