Will Inner or Outer AI alignment be considered "mostly solved" first?
Inner: 56%
Resolves as declared by the majority consensus at alignmentforum, slatestarcodex, lesswrong, and MIRI, together with my own opinion.
This question is managed and resolved by Manifold.
Related questions
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
35% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance
Will I focus on the AI alignment problem for the rest of my life?
63% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
21% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
60% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
4% chance
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
25% chance
Will OpenAI announce a major breakthrough in AI alignment in 2024?
21% chance
Will xAI significantly rework their alignment plan by the start of 2026?
63% chance
Is AI alignment computable?
34% chance