Will a GPT-4 level system be trained for <$1mm by 2030?
87% chance
This question is managed and resolved by Manifold.
@firstuserhere So from Moore's-law gains alone you would expect a 16-fold reduction, bringing you down to about $6 million. There are also chip-architecture gains beyond just adding transistors, training-efficiency gains, and ways to filter the training data so compute isn't wasted on worthless, low-information examples (for instance, trying to memorize hashes or public keys that happen to be in the training set).

Also, if the GPT-4 training source code is similar to GPT-3's, it's a tiny Python program of a few thousand lines. Open-source versions exist, and over the next 7 years many innovations will be found that weren't available to OpenAI.
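The comment's back-of-envelope arithmetic can be sketched in a few lines. Assumptions not in the source: a ~$100M GPT-4 training cost as the 2023 baseline, and cost-per-unit-compute halving every ~1.75 years (the rate implied by a 16x reduction over 7 years, since 2^(7/1.75) = 16).

```python
def projected_cost(base_cost: float, years: float, halving_period: float) -> float:
    """Cost after `years`, assuming cost halves every `halving_period` years.

    Assumed baseline and halving rate are illustrative, not from the source.
    """
    return base_cost / (2 ** (years / halving_period))

# ~$100M baseline (assumed), 7 years out, halving every 1.75 years:
cost_2030 = projected_cost(100e6, years=7, halving_period=1.75)
print(f"${cost_2030 / 1e6:.2f}M")  # → $6.25M, the comment's "~6 million"
```

Note this counts hardware gains only; the algorithmic and data-filtering improvements the comment mentions would push the number lower still.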
Related questions
GPT-5 by 2025? (7% chance)
Will a GPT-3 quality model be trained for under $10,000 by 2030? (82% chance)
Will a GPT-4 quality model be trained for under $10,000 by 2030? (78% chance)
Will a GPT-4 level system be trained for <$1mm by 2028? (89% chance)
Will a GPT-3 quality model be trained for under $1,000 by 2030? (76% chance)
Will there be an LLM (as good as GPT-4) that was trained with 1/100th the energy consumed to train GPT-4, by 2026? (53% chance)
Will it cost $30 to train a GPT-3 level model in 2030? (19% chance)
100GW AI training run before 2031? (53% chance)
Is GPT-4 best? (Thru 2025) (52% chance)
Will any open-source model achieve GPT-4 level performance on MMLU through 2024? (83% chance)