Will we have any progress on the interpretability of State Space Model LLMs in 2024?
Resolved YES · Dec 28 · Ṁ2,961 volume
State Space Models like Mamba introduce new possibilities, as States are a new object type, a compressed snapshot of a mind at a point in time which can be saved, restored, and interpreted. But a cursory search didn’t turn up any work on interpreting either States or State Space Models.
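To make the "compressed snapshot" framing concrete, here is a minimal toy sketch (illustrative only, not the actual Mamba architecture) of a linear state-space recurrence. The hidden state `h` is the only thing carried between steps, so it can be saved mid-sequence and restored later to reproduce the run exactly:

```python
# Toy linear SSM: h_t = A*h_{t-1} + B*x_t, output y_t = C*h_t.
# Illustrative sketch; real SSM LLMs like Mamba use high-dimensional,
# input-dependent versions of this recurrence.

def ssm_step(h, x, A=0.9, B=1.0, C=2.0):
    """One recurrence step; returns (new state, output)."""
    h = A * h + B * x
    return h, C * h

def run(xs, h=0.0):
    """Process a sequence from an initial state, returning final state + outputs."""
    ys = []
    for x in xs:
        h, y = ssm_step(h, x)
        ys.append(y)
    return h, ys

# Process a prefix, snapshot the state, then resume from the snapshot.
h_mid, _ = run([1.0, 2.0, 3.0])           # prefix
snapshot = h_mid                          # the whole "mind" at this point
h_resumed, _ = run([4.0, 5.0], h=snapshot)
h_full, _ = run([1.0, 2.0, 3.0, 4.0, 5.0])
assert h_resumed == h_full                # restoring the state reproduces the run
```

Interpreting such a model would mean probing what `h` encodes at a given point, which is the kind of work this market asks about.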
This resolves YES if research is published that makes significant interpretability progress on a state space large language model. I will not bet on this market.
Update 2024-12-27 (PST), clarification:
The work arXiv:2404.05971v1 qualifies as making significant interpretability progress on a state space large language model. (AI summary of creator comment)
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ226 |
| 2 | | Ṁ152 |
| 3 | | Ṁ86 |
| 4 | | Ṁ20 |
| 5 | | Ṁ8 |
Related questions
Will an LLM Built on a State Space Model Architecture Have Been SOTA at any Point before EOY 2027? [READ DESCRIPTION]
43% chance
Will mechanistic interpretability have more academic impact than representation engineering by the end of 2025?
72% chance
Will interpretability be commonplace in physics papers relying on machine learning by the end of 2025?
10% chance
Will LLMs be able to formally verify non-trivial programs by the end of 2025?
30% chance
Will the state-of-the-art AI model use latent space to reason by 2026?
19% chance
Will we get a new LLM paradigm by EOY?
32% chance
Will a lab train a >=1e26 FLOP state space model before the end of 2025?
15% chance
[Situational awareness] Will pre-2028 LLMs achieve token-output control?
38% chance
Will RL work for LLMs "spill over" to the rest of RL by 2026?
34% chance
[Situational awareness] Will pre-2026 LLMs achieve token-output control?
30% chance