Will "Sparse Autoencoders Find Highly Interpretable..." make the top fifty posts in LessWrong's 2023 Annual Review?
12% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market will resolve to 100% if the post "Sparse Autoencoders Find Highly Interpretable Directions in Language Models" is one of the top fifty posts of the 2023 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "Towards Developmental Interpretability" make the top fifty posts in LessWrong's 2023 Annual Review?
69% chance
Will "Toward A Mathematical Framework for Computati..." make the top fifty posts in LessWrong's 2024 Annual Review?
46% chance
Will "Overview of strong human intelligence amplifi..." make the top fifty posts in LessWrong's 2024 Annual Review?
45% chance
Will "LeCun’s “A Path Towards Autonomous Machine In..." make the top fifty posts in LessWrong's 2023 Annual Review?
18% chance
Will "Revealing Intentionality In Language Models T..." make the top fifty posts in LessWrong's 2023 Annual Review?
24% chance
Will "Scaling and evaluating sparse autoencoders" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "Natural Latents: The Math" make the top fifty posts in LessWrong's 2023 Annual Review?
15% chance
Will "Natural Latents: The Math" make the top fifty posts in LessWrong's 2023 Annual Review?
16% chance
Will "Introducing Leap Labs, an AI interpretability..." make the top fifty posts in LessWrong's 2023 Annual Review?
14% chance
Will "If interpretability research goes well, it ma..." make the top fifty posts in LessWrong's 2023 Annual Review?
14% chance