
Will an LLM trained with FP4 have competitive performance in 2 years' time?
1k · Ṁ2,955 · resolved Mar 4
Resolved NO
"Currently, the technology for 4-bit training does not exists, but research looks promising and I expect the first high performance FP4 Large Language Model (LLM) with competitive predictive performance to be trained in 1-2 years time." (see: https://timdettmers.com/2023/01/16/which-gpu-for-deep-learning/)
Granted, the model must be open source (or its training details published) for us to know, so the market will resolve based on publicly available information.
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Total profit |
|---|---|
| 1 | Ṁ158 |
| 2 | Ṁ152 |
| 3 | Ṁ90 |
| 4 | Ṁ55 |
| 5 | Ṁ33 |
Related questions
| Question | Chance |
|---|---|
| Will an LLM trained with FP4 have frontier-level performance before 2028? | 31% |
| Will an LLM improve its own ability along some important metric well beyond the best trained LLMs before 2026? | 50% |
| Will one of the major LLMs be capable of continual lifelong learning (learning from inference runs) by EOY 2025? | 26% |
| Will a publicly available LLM achieve gold on the IMO before 2026? | 30% |
| Will LLMs mostly overcome the Reversal Curse by the end of 2025? | 72% |
| Will there be a major breakthrough in LLM continual learning before 2026? | 25% |
| Will China have the best LLM by the end of 2025? | 14% |
| Will Apple release its own LLM on par with state-of-the-art LLMs before 2026? | 7% |
| Will a majority of Harvard undergrads train an LLM within 3 years, as per Altman? | 4% |
| Will LLM training costs fall 100x by 2028? | 92% |