(LW Interesting Disagreements) Prosaic Alignment is currently more important to work on than Agent Foundations work.
Never closes
This question is managed and resolved by Manifold.
Related questions
Will Inner or Outer AI alignment be considered "mostly solved" first?
In 5 years will I think the org Conjecture was net good for alignment? (57% chance)
Will "Alignment remains a hard, unsolved problem" make the top fifty posts in LessWrong's 2025 Annual Review? (14% chance)
How difficult will Anthropic say the AI alignment problem is?
Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027? (15% chance)
Will "Alignment will happen by default. What's next?" make the top fifty posts in LessWrong's 2025 Annual Review? (7% chance)
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals? (17% chance)