Will Inner or Outer AI alignment be considered "mostly solved" first?
Inner: 56%
Resolves when a majority consensus declares it "mostly solved" across the Alignment Forum, Slate Star Codex, LessWrong, and MIRI, together with my own opinion.
This question is managed and resolved by Manifold.
Related questions
Will we solve AI alignment by 2026?
2% chance
Will xAI significantly rework their alignment plan by the start of 2026?
20% chance
Will Meta AI start an AGI alignment team before 2026?
21% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Will I focus on the AI alignment problem for the rest of my life?
38% chance
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
25% chance
How difficult will Anthropic say the AI alignment problem is?
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals?
17% chance