Will "LLMs for Alignment Research: a safety priority?" make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance
As part of LessWrong's Annual Review, the community nominates, reviews, and votes on the most valuable posts. Posts become eligible once they have been up for at least 12 months, and the 2024 Review concludes in February 2026.
This market will resolve to 100% if the post "LLMs for Alignment Research: a safety priority?" is one of the top fifty posts of the 2024 Review, and to 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "LLM Applications I Want To See" make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance
Will "LLMs can learn about themselves by introspection" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "Takes on "Alignment Faking in Large Language ..." make the top fifty posts in LessWrong's 2024 Annual Review?
19% chance
Will "LLM Generality is a Timeline Crux" make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance
Will "Alignment Faking in Large Language Models" make the top fifty posts in LessWrong's 2024 Annual Review?
94% chance
Will "Connecting the Dots: LLMs can Infer & Verbali..." make the top fifty posts in LessWrong's 2024 Annual Review?
18% chance
Will "Without fundamental advances, misalignment an..." make the top fifty posts in LessWrong's 2024 Annual Review?
50% chance
Will "Introducing Alignment Stress-Testing at Anthropic" make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance
Will "Lsusr's Rationality Dojo" make the top fifty posts in LessWrong's 2024 Annual Review?
17% chance
Will "How to replicate and extend our alignment fak..." make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance