Will "Evaluations (of new AI Safety researchers) ca..." make the top fifty posts in LessWrong's 2023 Annual Review?
14% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market resolves to 100% if the post "Evaluations (of new AI Safety researchers) can be noisy" ranks among the top fifty posts of the 2023 Review, and to 0% otherwise. The market was initialized at 14%.
Related questions
Will "Towards understanding-based safety evaluations" make the top fifty posts in LessWrong's 2023 Annual Review?
10% chance
Will "Cognitive Emulation: A Naive AI Safety Proposal" make the top fifty posts in LessWrong's 2023 Annual Review?
37% chance
Will "AI Control: Improving Safety Despite Intentio..." make the top fifty posts in LessWrong's 2023 Annual Review?
83% chance
Will "Speaking to Congressional staffers about AI risk" make the top fifty posts in LessWrong's 2023 Annual Review?
25% chance
Will "When can we trust model evaluations?" make the top fifty posts in LessWrong's 2023 Annual Review?
20% chance
Will "OpenAI: Facts from a Weekend" make the top fifty posts in LessWrong's 2023 Annual Review?
24% chance
Will "AI #8: People Can Do Reasonable Things" make the top fifty posts in LessWrong's 2023 Annual Review?
7% chance
Will "The case for more ambitious language model evals" make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance
Will "Current AIs Provide Nearly No Data Relevant t..." make the top fifty posts in LessWrong's 2023 Annual Review?
12% chance
Will "LLMs for Alignment Research: a safety priority?" make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance