
If LMs store info as features in superposition, does # features scale superlinearly with number of model parameters?
Closes Dec 31
41% chance
Will clarify operationalization later, bet at your own risk.
This question is managed and resolved by Manifold.
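Background sketch (not this market's operationalization, just an illustration of the premise): in the superposition picture, a model with d_model dimensions can represent many more than d_model sparse features along nearly orthogonal directions, at the cost of some interference between them. The toy below, using an assumed random feature matrix W, packs 512 unit-norm feature directions into 64 dimensions and measures the worst-case pairwise interference.

import numpy as np

# Toy illustration of superposition: more feature directions than dimensions.
rng = np.random.default_rng(0)
d_model, n_features = 64, 512              # far more features than dimensions
W = rng.normal(size=(d_model, n_features)) # assumed random feature directions
W /= np.linalg.norm(W, axis=0)             # normalize each feature direction

# Interference between features = off-diagonal entries of W^T W.
gram = W.T @ W
interference = np.abs(gram - np.eye(n_features)).max()
print(f"{n_features} features in {d_model} dims, max pairwise interference {interference:.2f}")

The sketch only shows that packing far more features than dimensions is possible in principle; whether the number of features a trained LM actually uses grows superlinearly with its parameter count is the empirical question this market asks.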
Related questions
If LMs store info as features in superposition, are there >300K features in GPT-2 small L7? (see desc) · 59% chance
Will the best LLM in 2027 have <1 trillion parameters? · 26% chance
Will the best LLM in 2026 have <1 trillion parameters? · 40% chance
Will the best LLM in 2025 have <1 trillion parameters? · 42% chance
Will the best LLM in 2025 have <500 billion parameters? · 24% chance
Will the best LLM in 2027 have <500 billion parameters? · 13% chance
Will the best LLM in 2026 have <500 billion parameters? · 27% chance
Will the best LLM in 2027 have <250 billion parameters? · 12% chance
Which LLM has more parameters?