Will we solve AI alignment by 2026?
7% chance

I would consider it "solved" if we can prove with a high degree of confidence that developing an AGI would have less than a 1% chance of resulting in a misaligned AGI.

Aligned meaning that it essentially does what's "best" for us, whether or not that is what we tell it we want: it understands our values and acts accordingly.

If values differ from person to person, it could decide democratically what actions to take, or align itself dynamically to each individual, perhaps by creating personalized universe simulations or in some other way.


Lol, no.

But I'll keep an eye on this in case I have the liquidity to take whatever donations people wanna give out by betting yes.

bought Ṁ120 of YES

Taking some hopium for my mental health