Will "AI alignment researchers don't (seem to) stack
" make the top fifty posts in LessWrong's 2023 Annual Review?
29% chance
As part of LessWrong's Annual Review, the community nominates, reviews, and votes on the most valuable posts. Posts become eligible for review once they have been up for at least 12 months, and the 2023 Review concludes in February 2025.
This market will resolve to 100% if the post "AI alignment researchers don't (seem to) stack" is one of the top fifty posts of the 2023 Review, and to 0% otherwise. The market was initialized to 14%.
Related questions
Will "AGI Safety and Alignment at Google DeepMind:
..." make the top fifty posts in LessWrong's 2024 Annual Review?
26% chance
Will "There should be more AI safety orgs" make the top fifty posts in LessWrong's 2023 Annual Review?
34% chance
Will "Without fundamental advances, misalignment an..." make the top fifty posts in LessWrong's 2024 Annual Review?
46% chance
Will "There should be more AI safety orgs" make the top fifty posts in LessWrong's 2023 Annual Review?
37% chance
Will ""Carefully Bootstrapped Alignment" is organiz..." make the top fifty posts in LessWrong's 2023 Annual Review?
30% chance
Will "Alignment Implications of LLM Successes: a De..." make the top fifty posts in LessWrong's 2023 Annual Review?
37% chance
Will "Tips for Empirical Alignment Research" make the top fifty posts in LessWrong's 2024 Annual Review?
24% chance
Will "Why Not Just... Build Weak AI Tools For AI Al..." make the top fifty posts in LessWrong's 2023 Annual Review?
24% chance
Will "Why was the AI Alignment community so unprepa..." make the top fifty posts in LessWrong's 2023 Annual Review?
13% chance
Will ""Carefully Bootstrapped Alignment" is organiz..." make the top fifty posts in LessWrong's 2023 Annual Review?
28% chance