Will we solve AI alignment by 2026?
Ṁ3555 · Dec 31
8% chance
I would consider alignment "solved" if we can show, with a high degree of confidence, that developing an AGI carries less than a 1% chance of producing a misaligned AGI.
"Aligned" meaning that it essentially does what is "best" for us, whether or not that is what we tell it we want: it understands our values and acts accordingly.
If values differ from person to person, it could decide democratically what actions to take, or align itself dynamically to each individual, perhaps by creating personalized universe simulations or some other means.
This question is managed and resolved by Manifold.
Related questions
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
52% chance
Will Meta AI start an AGI alignment team before 2026?
35% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will xAI significantly rework their alignment plan by the start of 2026?
63% chance
Will Tetraspace have published a research paper on AI alignment by March 1, 2025?
18% chance
Will a >$10B AI alignment megaproject start work before 2030?
29% chance
I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026
59% chance
Will Inner or Outer AI alignment be considered "mostly solved" first?