Will we solve AI alignment by 2026?
37 traders · Ṁ8,767 · closes Dec 31 · 1.4% chance

I would consider it "solved" if we can prove with a high degree of confidence that developing an AGI would have a less than 1% chance of resulting in a misaligned AGI.

"Aligned" meaning that it essentially does what is "best" for us, whether or not that is what we tell it we want: it understands our values and acts accordingly.

If values differ from person to person, it could decide democratically which actions to take, or align itself dynamically to each individual, perhaps by creating personalized universe simulations, or in some other way.
