
Will "Preventing Language Models from hiding their ..." make the top fifty posts in LessWrong's 2023 Annual Review?
Resolved NO · Feb 11
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market will resolve to 100% if the post "Preventing Language Models from hiding their reasoning" is one of the top fifty posts of the 2023 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ7 |
Related questions
Will "Auditing language models for hidden objectives" make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance
Will "Tracing the Thoughts of a Large Language Model" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Why Do Some Language Models Fake Alignment Wh..." make the top fifty posts in LessWrong's 2025 Annual Review?
16% chance
Will "Do models say what they learn?" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Small Models Can Introspect, Too" make the top fifty posts in LessWrong's 2025 Annual Review?
23% chance
Will "My model of what is going on with LLMs" make the top fifty posts in LessWrong's 2025 Annual Review?
13% chance
Will "Early Chinese Language Media Coverage of the ..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Scalable End-to-End Interpretability" make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Recent AI model progress feels mostly like bu..." make the top fifty posts in LessWrong's 2025 Annual Review?
13% chance
Will "Natural Latents: Latent Variables Stable Acro..." make the top fifty posts in LessWrong's 2025 Annual Review?
24% chance