"Large models aren’t more capable in the long run if we can iterate faster on small models" within five years
Ṁ120 · 2025 · 18% chance

LoRA updates are very cheap to produce (~$100) for the most popular model sizes. This means that almost anyone with an idea can generate one and distribute it. Training times under a day are the norm. At that pace, it doesn’t take long before the cumulative effect of all of these fine-tunings overcomes starting off at a size disadvantage. Indeed, in terms of engineer-hours, the pace of improvement from these models vastly outstrips what we can do with our largest variants, and the best are already largely indistinguishable from ChatGPT. Focusing on maintaining some of the largest models on the planet actually puts us at a disadvantage.
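
For context on why these fine-tunes are so cheap: a LoRA run freezes the base model and trains only small low-rank adapter matrices, so very few parameters are updated and the resulting adapter is a small, easily shared file. Below is a minimal sketch of what such a setup might look like, assuming the Hugging Face transformers and peft libraries; the base model, target module names, and hyperparameters are purely illustrative.

```python
# Illustrative LoRA setup; base model and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-1.4b"  # any small (<20B) open base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA keeps the base weights frozen and adds rank-r adapter matrices to
# selected layers, so only a tiny fraction of parameters are trainable.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # attention projection names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of base parameters
```

A run like this fits on a single consumer GPU and finishes in hours, which is what makes the "almost anyone with an idea" claim plausible.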

Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.

Resolves to my subjective opinion of the consensus. (I don't have any better ideas.)

(If you think this market is poorly written, under-specified, or has a mistake, tell me, and I may edit it within the first couple of days.)

Operationalizing a claim from https://www.semianalysis.com/p/google-we-have-no-moat-and-neither (see also the Hacker News comments).

Sister markets:
- The gap will close between the quality of open source language models and Google's internal language models (two years / five years)
- "Large models aren’t more capable in the long run if we can iterate faster on small models" (two years / five years)
