Will xAI significantly rework their alignment plan by the start of 2026?

Supposedly the xAI plan is something like this:

The premise is have the AI be maximally curious, maximally truth-seeking, I'm getting a little esoteric here, but I think from an AI safety standpoint, a maximally curious AI — one that's trying to understand the universe — I think is going to be pro-humanity from the standpoint that humanity is just much more interesting than not … Earth is vastly more interesting than Mars … that's like the best thing I can come up with from an AI safety standpoint. I think this is better than trying to explicitly program morality — if you try to program morality, you have to ask whose morality.

This is a terrible idea and I wonder if they are gonna realize that.

Resolves YES if they have meaningfully reworked their alignment plan by the beginning of 2026. They don't have to have a good plan, it resolves YES even if their plan is focused on some other silly principle, but they do have to move meaningfully away from curiosity/truthseeking as the goal for the AI. Adding nuances to curiosity/truthseeking doesn't count unless I become convinced that those nuances genuinely solve alignment.
