
Will "'AI Alignment' is a Dangerously Overloaded Term" make the top fifty posts in LessWrong's 2023 Annual Review?
Ṁ70 · Ṁ10 · resolved Feb 11
Resolved NO
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2023 Review resolves in February 2025.
This market will resolve to 100% if the post "'AI Alignment' is a Dangerously Overloaded Term" is one of the top fifty posts of the 2023 Review, and to 0% otherwise. The market was initialized at 14%.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ1 |
Related questions
Will "Announcing: OpenAI's Alignment Research Blog" make the top fifty posts in LessWrong's 2025 Annual Review? (6% chance)
Will "Alignment Pretraining: AI Discourse Causes Se..." make the top fifty posts in LessWrong's 2025 Annual Review? (23% chance)
Will "AGI Safety & Alignment @ Google DeepMind is h..." make the top fifty posts in LessWrong's 2025 Annual Review? (7% chance)
Will "Pretraining on Aligned AI Data Dramatically R..." make the top fifty posts in LessWrong's 2026 Annual Review? (13% chance)
Will "What Is The Alignment Problem?" make the top fifty posts in LessWrong's 2025 Annual Review? (15% chance)
Will "Shallow review of technical AI safety, 2025" make the top fifty posts in LessWrong's 2025 Annual Review? (14% chance)
Will "xAI's new safety framework is dreadful" make the top fifty posts in LessWrong's 2025 Annual Review? (14% chance)
Will "AI Governance to Avoid Extinction: The Strate..." make the top fifty posts in LessWrong's 2025 Annual Review? (11% chance)
Will "AI Control May Increase Existential Risk" make the top fifty posts in LessWrong's 2025 Annual Review? (15% chance)