Will a GPT-4 level system be trained for <$1mm by 2030?
88% chance (resolves 2030)
@firstuserhere So from Moore's-law gains alone you would expect roughly a 16-fold cost reduction by 2030 (about four doublings over the next seven years), bringing an assumed ~$100M GPT-4 training run down to around $6 million (see the sketch after this comment). On top of that there are chip-architecture gains, not just from adding transistors, plus training-efficiency gains and ways to filter the training data so compute isn't wasted on worthless, low-information examples (for example, trying to memorize hashes or public keys that happen to be in the training set).
Also, if the GPT-4 source is similar to the GPT-3 source, it's a tiny Python program of a few thousand lines. Open-source versions exist, and over the next seven years many innovations will be found that weren't available to OpenAI.
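A minimal sketch of the cost arithmetic in the comment above, assuming a ~$100M GPT-4 baseline (consistent with the ">$50M training cost" market below) and a range of hardware cost-efficiency doubling periods. The baseline and the doubling periods are illustrative assumptions, not disclosed OpenAI figures:

def projected_cost(baseline_usd: float, years: float, doubling_period_years: float) -> float:
    """Cost of an equivalent training run after `years`, if cost per unit
    of compute halves once per doubling period."""
    halvings = years / doubling_period_years
    return baseline_usd / (2 ** halvings)

if __name__ == "__main__":
    baseline = 100e6  # assumed GPT-4 training cost, ~$100M
    years = 7         # 2023 -> 2030
    # Optimistic / middle / conservative doubling periods, in years.
    for period in (1.5, 1.75, 2.0):
        cost = projected_cost(baseline, years, period)
        print(f"doubling every {period:.2f} yr -> ~${cost / 1e6:.1f}M by 2030")

With a 21-month (1.75-year) doubling period this reproduces the comment's figure exactly: 2^(7/1.75) = 16, so $100M / 16 ≈ $6.25M. Note that hardware gains alone do not reach the market's <$1M threshold; the algorithmic and data-filtering gains the comment mentions would have to supply the remaining ~6x.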
Related questions
Will GPT-4 be trained on more than 10T text tokens? (35% chance)
Will GPT-4 be trained (roughly) compute-optimally using the best-known scaling laws at the time? (30% chance)
Will GPT-4's parameter count be known by end of 2024? (42% chance)
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026? (48% chance)
Is GPT-4 best? (Thru 2025) (52% chance)
Will mechanistic interpretability be essentially solved for GPT-2 before 2030? (19% chance)
Will the estimated training cost of GPT-4 be over $50M? (96% chance)
Will GPT-5 be capable of some form of online learning? (27% chance)
Will a language model comparable to GPT-4 be trained, with ~1/10th the amount of energy it took to train GPT-4, by 2028? (78% chance)
Was GPT-4 trained in 4 months or less? (59% chance)