Will "Orthogonal: A new agent foundations alignment..." make the top fifty posts in LessWrong's 2023 Annual Review?
Resolved NO on Feb 11
As part of LessWrong's Annual Review, the community nominates, reviews, and votes on the most valuable posts of the year. Posts become eligible for review once they have been up for at least 12 months, and the 2023 Review concludes in February 2025.
This market resolves to 100% if the post "Orthogonal: A new agent foundations alignment organization" finishes among the top fifty posts of the 2023 Review, and to 0% otherwise. The market was initialized at 14%.
This question is managed and resolved by Manifold.
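The resolution criterion is a simple binary rule; a minimal sketch is below (hypothetical function and variable names for illustration, not Manifold's actual API):

```python
# Minimal sketch of this market's resolution rule (hypothetical names,
# not Manifold's actual API). The market pays out either 100% or 0%.

def resolve_market(top_fifty_posts: set[str]) -> float:
    """Return the resolution value for this market."""
    post = "Orthogonal: A new agent foundations alignment organization"
    # Resolves YES (100%) if the post ranks in the top fifty of the
    # 2023 Review, otherwise NO (0%).
    return 1.0 if post in top_fifty_posts else 0.0
```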
Related questions
Will "Introducing Alignment Stress-Testing at Anthropic" make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance
Will "Without fundamental advances, misalignment an..." make the top fifty posts in LessWrong's 2024 Annual Review?
50% chance
Will "The Field of AI Alignment: A Postmortem, and ..." make the top fifty posts in LessWrong's 2024 Annual Review?
28% chance
Will "How to replicate and extend our alignment fak..." make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "Takes on "Alignment Faking in Large Language ..." make the top fifty posts in LessWrong's 2024 Annual Review?
19% chance
Will "Making a conservative case for alignment" make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance
Will "A basic systems architecture for AI agents th..." make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance
Will "Using axis lines for good or evil" make the top fifty posts in LessWrong's 2024 Annual Review?
4% chance
Will "Alignment Faking in Large Language Models" make the top fifty posts in LessWrong's 2024 Annual Review?
94% chance
Will "What Is The Alignment Problem?" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance