Distributed LLM by 2025
Resolved N/A (Dec 25)

bought Ṁ100 of YES

From last week,

" Distributed Inference and Fine-tuning of Large Language Models Over The Internet "

How does this relate to the market?

I will quote Jack Clark of Anthropic:

Distributed inference makes decentralized AI easier: Most of AI policy rests on the assumption that AI will be centralized - training will be done on massive supercomputers and the resulting large-scale models will be served by big blobs of computers connected to one another via dense networks. Here, PETALS shows that the latter assumption could be false - big models may instead be served by ad-hoc collections of heterogeneous hardware communicating over standard lossy network connections. And since PETALS works for fine-tuning as well, it also suggests model adaptation is going to be an increasingly decentralized and therefore hard-to-control process.
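For concreteness, this is roughly what distributed inference looks like from the client side, a minimal sketch adapted from the Petals quickstart (the model name and prompt here are illustrative; check the Petals docs for which models the public swarm currently serves):

```python
# pip install petals
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # illustrative; any swarm-served model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The transformer blocks are served by volunteer peers over the internet;
# only the embeddings and the final layer run on your machine.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A distributed LLM is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```

The point of the Jack Clark quote is that the generate() call above looks identical to local inference, even though the compute is scattered across ad-hoc hardware nobody centrally controls.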

bought Ṁ35 of NO

@firstuserhere interpreting Gigacasting resolution criteria is possibly more error-prone than reading tea leaves, but I interpret his tweet linked below to imply this market is about training, not inference. Do you disagree?

bought Ṁ60 of YES

@RobertCousineau The tweet below shows that a proof of concept for training is possible. Then we need to scale and improve the quality, and I think we're there now. Finally, Gigacasting says "in the zone of GPT-4". Assuming he means qualitatively, we're not there yet, so 1 more year to get there.

Since the market could resolve either way, and without Gigacasting around to help narrow things down by eliminating confusing bits of information, I suggest we N/A this, move on, and create a better-defined market on the same question.

predicted YES

@EvanDaniel you're online, so pinging you. Thoughts? There are only 2 market participants as of now, both of whom use Manifold, haha. And both of us just bet in the last 5 mins. Should we N/A this?

bought Ṁ0 of NO

@firstuserhere yeah, when I made that first NO bet I was internally saying "it is very unlikely that we train a model at the level of GPT-4 with a novel [as I guessed it to be] distributed training method."

Your link does show it is not as novel as I thought (although the first part says models will still be trained on centralized supercomputers).

I'm quite comfortable with an N/A.

predicted NO

@firstuserhere if you make a new market with decentish resolution criteria, I'll (foolishly) throw up some limit orders.

@firstuserhere I'm a fan of N/A'ing this and making some good markets.

My best guess about how to interpret this is something like "a GPT-4-class or better model actually gets trained in a distributed way before 2025". But Gigacasting markets are all a mess and I don't know if that's the "right" interpretation.

I'll go ahead and N/A. Thanks for the ping!

predicted YES
predicted NO

@firstuserhere Limits set. If you disagree strongly with them, I'm happy to make bigger bets after a little more deliberation.

Is this for training or inference?

predicted NO

And in the zone of GPT-4

bought Ṁ1 of NO