Will "Manifold: If okay AGI, why?" make the top fifty posts in LessWrong's 2023 Annual Review?
Ṁ7 · Resolved NO · Feb 11
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market will resolve to 100% if the post "Manifold: If okay AGI, why?" is one of the top fifty posts of the 2023 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ1 |
Related questions
Will "My AGI safety research—2024 review, ’25 plans" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "Principles for the AGI Race" make the top fifty posts in LessWrong's 2024 Annual Review?
11% chance
Will "AGI Safety and Alignment at Google DeepMind:
..." make the top fifty posts in LessWrong's 2024 Annual Review?
26% chance
Will "The Field of AI Alignment: A Postmortem, and ..." make the top fifty posts in LessWrong's 2024 Annual Review?
28% chance
Will "Me, Myself, and AI: the Situational Awareness..." make the top fifty posts in LessWrong's 2024 Annual Review?
16% chance
Will "things that confuse me about the current AI m..." make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance
Will "AIs Will Increasingly Attempt Shenanigans" make the top fifty posts in LessWrong's 2024 Annual Review?
11% chance
Will "Modern Transformers are AGI, and Human-Level" make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance
Will "The Hopium Wars: the AGI Entente Delusion" make the top fifty posts in LessWrong's 2024 Annual Review?
13% chance
Will "A basic systems architecture for AI agents th..." make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance