
Will "Sparse Autoencoders Find Highly Interpretable..." make the top fifty posts in LessWrong's 2023 Annual Review?
Ṁ1k subsidy · Ṁ150 volume · Resolved NO on Feb 11
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market will resolve to 100% if the post Sparse Autoencoders Find Highly Interpretable Directions in Language Models is one of the top fifty posts of the 2023 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|--------|--------------|
| 1 |        | Ṁ20          |
Related questions
Will "Scalable End-to-End Interpretability" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Interpretability Will Not Reliably Find Decep..." make the top fifty posts in LessWrong's 2025 Annual Review?
38% chance
Will "An Ambitious Vision for Interpretability" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will "A Pragmatic Vision for Interpretability" make the top fifty posts in LessWrong's 2025 Annual Review?
24% chance
Will "Deep learning as program synthesis" make the top fifty posts in LessWrong's 2026 Annual Review?
17% chance
Will "OpenAI #10: Reflections" make the top fifty posts in LessWrong's 2025 Annual Review?
29% chance
Will "Attribution-based parameter decomposition" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "How To Become A Mechanistic Interpretability ..." make the top fifty posts in LessWrong's 2025 Annual Review?
50% chance
Will "How AI Is Learning to Think in Secret" make the top fifty posts in LessWrong's 2026 Annual Review?
21% chance
Will "Natural Latents: Latent Variables Stable Acro..." make the top fifty posts in LessWrong's 2025 Annual Review?
24% chance