Will we have any progress on the interpretability of State Space Model LLMs in 2024?
65% chance (resolves 2025)
State Space Models like Mamba introduce new possibilities for interpretability: the recurrent state is a new kind of object, a compressed snapshot of a mind at a point in time that can be saved, restored, and interpreted. But a cursory search didn't turn up any work on interpreting either states or State Space Models.
This resolves Yes if research is published that makes any significant interpretability progress on a state space large language model. I will not bet on this market.
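To make the "compressed snapshot" idea concrete, here is a minimal toy sketch (not the real Mamba API; the matrices and dimensions are illustrative assumptions) of a linear state-space recurrence, showing that the state vector can be copied mid-sequence and later restored to replay the exact same continuation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state = 4                               # size of the hidden state (toy choice)
A = 0.9 * np.eye(d_state)                 # state transition matrix (stable, toy)
B = rng.normal(size=(d_state, 1))         # input projection
C = rng.normal(size=(1, d_state))         # output projection

def step(h, x):
    """One SSM recurrence step: h' = A h + B x, y = C h'."""
    h_next = A @ h + B @ x
    return h_next, C @ h_next

# Run the recurrence over a prefix of inputs.
h = np.zeros((d_state, 1))
for _ in range(5):
    h, _ = step(h, np.array([[1.0]]))

snapshot = h.copy()                       # save the state: a "snapshot of the mind"

h_live, y_live = step(h, np.array([[2.0]]))          # continue from the live state
h_restored, y_restored = step(snapshot, np.array([[2.0]]))  # restore and replay

assert np.allclose(y_live, y_restored)    # the restored state continues identically
```

Because the entire context is compressed into `h`, interpreting that single fixed-size vector (rather than a growing attention cache) is what makes SSM states a distinctive target for interpretability work.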
Related questions
- Will AI be able to solve confusing but elementary geometric reasoning problems in 2024? (48% chance)
- Will mechanistic interpretability be essentially solved for GPT-2 before 2030? (23% chance)
- Will a model costing >$30M be intentionally trained to be more mechanistically interpretable by end of 2027? (see desc) (57% chance)
- Will there be a gpt-4 quality LLM with distributed inference by the end of 2024? (31% chance)
- By 2028 will we be able to identify distinct submodules/algorithms within LLMs? (75% chance)
- Will an LLM Built on a State Space Model Architecture Have Been SOTA at any Point before EOY 2027? [READ DESCRIPTION] (58% chance)
- Will language models be able to solve simple graphical mazes by the end of 2025? (65% chance)
- Will there be an LLM which can do fluent conlang translations by EOY 2024? (67% chance)
- Will we be able to estimate the feature importance curve or feature sparsity curve of real models? (2024 end) (62% chance)