I would consider it "solved" if we can prove with a high degree of confidence that developing an AGI would have a less than 1% chance of resulting in a misaligned AGI.
Aligned meaning that it essentially does what's "best" for us, whether or not that is what we tell it we want: it understands our values and acts accordingly.
If values differ from person to person, it could decide democratically which actions to take, or align itself dynamically to each individual, perhaps by creating personalized universe simulations or through some other means.
Related questions
Will Meta AI start an AGI alignment team before 2026?
35% chance
Will a large scale, government-backed AI alignment project be funded before 2025?
15% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
49% chance
Will deceptive misalignment occur in any AI system before 2030?
67% chance
Will there be a well accepted formal definition of value alignment for AI by 2030?
25% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
48% chance
Will OpenAI announce a major breakthrough in AI alignment in 2024?
42% chance
Will Kurzgesagt release a video specifically about the AI Alignment Problem before the end of 2024?
68% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
37% chance
Will a very large-scale AI alignment project be funded before 2025?
17% chance