Will an LLM with less than 10B parameters beat GPT4 by EOY 2025?
84% chance

How much juice is left in 10B parameters?

The bar to beat: the original GPT-4-0314 (Elo 1188), as judged by the LMSYS Chatbot Arena leaderboard.

Current sub-10B SoTA: Llama 3 8B Instruct (Elo 1147).

Maybe Phi-3 Small?
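
For scale, the Elo model maps that 41-point gap to a head-to-head win rate. A minimal sketch, assuming the standard Elo expected-score formula and the ratings quoted above:

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

gpt4_0314 = 1188   # GPT-4-0314 on the LMSYS Arena leaderboard
llama3_8b = 1147   # Llama 3 8B Instruct, current sub-10B SoTA

print(f"P(GPT-4-0314 beats Llama 3 8B) = {elo_win_prob(gpt4_0314, llama3_8b):.3f}")
# -> about 0.559: a 41-point gap is roughly a 56/44 head-to-head split
```

In other words, the remaining gap amounts to GPT-4-0314 winning about 56% of pairwise matchups, real but not enormous.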

singer bought Ṁ10 YES

Open-source models have an inherent advantage: they don't need to conform to censorship/"safety" policies, which makes them more useful and earns them a higher Elo. Having to conform to those policies is a major handicap for the larger closed-source models.

@singer There is a "without refusals" board on LMSYS. The disparity between the main leaderboard and that one shows how much of an advantage not having censorship gives you.