Will an LLM trained with FP4 have competitive performance in 2 years' time?
resolved Mar 4
Resolved
NO

"Currently, the technology for 4-bit training does not exists, but research looks promising and I expect the first high performance FP4 Large Language Model (LLM) with competitive predictive performance to be trained in 1-2 years time." (see: https://timdettmers.com/2023/01/16/which-gpu-for-deep-learning/)
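For context on what FP4 training would require: an FP4 (E2M1) number has 1 sign bit, 2 exponent bits, and 1 mantissa bit, so each weight can take only 16 distinct values, ±{0, 0.5, 1, 1.5, 2, 3, 4, 6}. Below is a minimal sketch of what quantizing a tensor to that grid looks like, assuming a simple per-tensor absmax scale; the function name and scaling scheme are illustrative, not taken from any particular library.

```python
import numpy as np

# The positive half of the FP4 (E2M1) value set; the full grid is symmetric.
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_VALUES[::-1], FP4_VALUES])

def quantize_fp4(x, scale=None):
    """Round each element of x to the nearest FP4-representable value.

    An absmax scale maps the tensor into FP4's dynamic range [-6, 6];
    real training schemes use finer-grained (per-block) scales.
    """
    if scale is None:
        absmax = np.abs(x).max()
        scale = absmax / 6.0 if absmax > 0 else 1.0
    scaled = x / scale
    # Nearest-neighbor lookup against the 16-value grid.
    idx = np.abs(scaled[..., None] - FP4_GRID).argmin(axis=-1)
    return FP4_GRID[idx] * scale, scale

w = np.array([0.03, -0.11, 0.27, -0.6])
q, s = quantize_fp4(w)  # q is w snapped to 16 representable levels
```

The coarseness of that grid is exactly why FP4 *training* (as opposed to post-training quantization) was considered an open research problem at the time the market was created.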

Granted, the model must be open source for us to know, so the market will resolve based on publicly available information.

Get
Ṁ1,000
to start trading!

🏅 Top traders

#  Name  Total profit
1        Ṁ158
2        Ṁ152
3        Ṁ90
4        Ṁ55
5        Ṁ13

@typedfemale Hi! Would you mind resolving this market?

@Gabrielle @Bayesian we got a lesson about this in Discord, here

My executive summary:

  • There's some evidence that FP4 is 'around the corner' and may demonstrate some of these qualities

But it's not enough to qualify for the market's criteria of:

  • Open source

  • Publicly available information

  • As of 21 January 2025

If someone disagrees with the way I'm spinning the summary of the conversation, post here!

@Eliza it should resolve NO

@PoliticalEconomyPK Well, I agree, but do we need to wait for a panel of three moderators to weigh in? I tried to wrangle some with no luck, and it's been several weeks now.

According to the linked analysis from sof, this simply did not happen. It doesn't sound like an ambiguity or a judgement call, just a question of "did anyone refute this analysis?" (no one did).

So, let's resolve it NO on the grounds that "the resolution is obvious" rather than "ambiguous".

@Manifold

@mods

@ManifoldAI

Can any AI expert chime in and resolve this market? According to a ChatGPT prompt I ran, this should resolve NO.

predicted NO

Exclusively in FP4? Or does training partially in FP4 count? What if the model is, on average, 60% FP4 over the course of training?

I guess you covered this with "trained in 4-bit (to some extent)"

predicted NO

@NoaNabeshima This is about post-training precision adjustments.

Competitive with what? SOTA models trained in FP16?

predicted NO

This seems important @typedfemale
Will this resolve YES if scaling laws suggest a 4-bit model would be competitive if compute-matched to a SOTA 16-bit model?

predicted NO

(but there isn't a trained SOTA 4-bit model)

@NoaNabeshima Yes, the model needs to be better than everything else while being trained in 4-bit (to some extent).

@typedfemale Would fine-tuning in 4-bit trigger YES? Would 80% of parameters in 4-bit trigger YES?

© Manifold Markets, Inc.