At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?
2035 · 59% chance

If Eliezer believes that there's at least a 75% chance of an AI existential risk coming to pass within the next 50 years, this resolves YES.

I'll resolve the market based on public statements from them in the previous and subsequent few months. Eliezer doesn't like putting explicit probabilities on this, so I'll attempt to infer their beliefs from their more subjective statements.

Resolves N/A in the event that Eliezer is no longer alive/conscious or AI doom has already occurred.

sold Ṁ36 YES

I think it will play out pretty soon, so either we're all dead, or we've entered some stable regime.

@Lavander Just because things are good/stable for a while doesn't necessarily mean we're safe, but if we're in a stable regime and we have the time and tools to prove that stability, I think EY will come around and celebrate.

I think convincing Eliezer of no-doom is a strictly superhuman task, but this market effectively conditions on the existence of superhuman, non-doom AI so... 50%?

bought Ṁ5 YES

@JacobPfau Does it? There could just be no AGI by then.

I genuinely do not see any likely possible futures or class of futures where he updates downwards.

predictedNO

@Lorxus Psychologically, his feeling of doom tracks the degree to which he does not feel respected by the mainstream of AI researchers. This is why he said that he DID update downwards when people said things respectful of his opinions.

Nonetheless, I do not think this will stop empirical facts (i.e. the world continuing to not be destroyed) from pushing him to update downwards. It will always just be a smaller update than it should be.

@DavidBolin The world not having been destroyed is not relevant evidence that his prediction was wrong, due to the anthropic principle.

predictedNO

@IsaacKing It is relevant evidence.

If you believe that there is a 100% chance that the world will not exist tomorrow, your belief is proven to be false if you survive until tomorrow, and the chance you were right goes to 0%.

If you believe that there is a 99.99% chance that the world will not exist tomorrow, there is similarly a Bayesian update that you should make against the theory that led you to assign those odds, and the anthropic principle cannot stop this from happening any more than in the 100% case.
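For concreteness, here is a minimal sketch of the update being described, in Python, since it is just Bayes' rule arithmetic. The 0.5 prior and the exact likelihoods are illustrative assumptions, not anyone's stated numbers:

```python
# Bayes update on a doom hypothesis H after observing "the world still exists".
# All numbers below are illustrative assumptions, not anyone's actual beliefs.

def posterior(prior_h, p_survive_given_h, p_survive_given_not_h=1.0):
    """P(H | survived) via Bayes' rule."""
    numerator = p_survive_given_h * prior_h
    denominator = numerator + p_survive_given_not_h * (1 - prior_h)
    return numerator / denominator

# H claims a 99.99% chance the world ends by tomorrow, so P(survive | H) = 0.0001.
print(posterior(prior_h=0.5, p_survive_given_h=0.0001))  # ~0.0001: a large update against H
# The 100% case collapses entirely: surviving falsifies H outright.
print(posterior(prior_h=0.5, p_survive_given_h=0.0))     # 0.0
```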

predictedNO

@IsaacKing Notably, even Jehovah's Witnesses have become very evidently less sure about their theories about the end of the world, and they have done so precisely because it has failed to end, repeatedly.

To be precise, it is enough if you are able to witness one half of the experiment; you do not actually have to be able to witness the opposite, as long as your not witnessing anything at all is a coherent possibility.

@DavidBolin Interesting. Have a link to a more in-depth explanation I can read?

predictedYES

@IsaacKing That's Nick Bostrom's perspective, which is wrong. If a hypothesis predicts the world will be destroyed, and the world is not destroyed, then that counts as evidence against the hypothesis.

bought Ṁ10 NO

@Lorxus After AGI and ASI are ubiquitous and haven't even tried to do anything harmful?

@Lorxus His prediction is that when superintelligent AI is created, it will wipe out humanity. It's not open-ended. If you're arguing that superintelligence isn't going to happen at all, you're having a completely different conversation, where Yudkowsky isn't particularly relevant.

predictedNO

If it turns out that we are in a simulation, not created by humans of any kind, but by the AI that wiped out humanity, I assume this resolves N/A on the grounds that "AI doom has already occurred"?

@DavidBolin I think this should resolve based on what happens in our universe, not any above us.

In the worlds where machine learning hits a hard limit and we have to do a lot more work to get gains -> then he probably updates down to a degree due to the extra time for alignment and focus on more interpretable techniques

In the worlds where we get global coordination around stalling AI -> updates down because of greater chance that we'll stall for long enough

In the worlds where we made an aligned AGI -> updates down significantly, because we succeeded

In the worlds where we are just barreling along and ML has just been slower than expected -> he probably updates upwards to some degree if there hasn't been significant alignment progress

I think a lot of his beliefs in these worlds depend significantly on 'how much alignment progress have we gotten?'. So it is then some mix of:

  • how much evidence failing to get AGI gives

    • did ML hit hard barriers?

    • have we actually thrown a massive amount of ML-assisted optimization work at them yet?

    • have we started pivoting to a new alternative by then, or is most work on AGI now smaller projects since ML is failing?

    • is ML just being slower rather than a hard stop?

    • or did we find an alternative method that works better for a lot of problems but is somehow less capable of strong generalization?

  • how much alignment progress have we made?

    • have we stayed at the current level of progress speed, or accelerated notably?

      • I'd expect it to accelerate a good bit, because decent theorem-proving AIs look entirely possible right now and shouldn't need large advancements in ML

    • have we had much success in translating neural nets into more formal approaches and then handling them nicely?

  • How powerful are the current paradigms?

    • Like, if ML is just having a slow patch but then picks back up significantly, then obviously you update downwards in the intervening time and update upwards once you see it was just a rough patch (or just update directly if you predict that)

    • If ML hit a hard limit and the current advancements are lackluster, then you probably update downwards since it seems like we have time

  • What is general background opinion on alignment-esque work?

    • if governments are coordinating remotely decently, then that helps a lot with timelines

    • if the ML field becomes more aware of alignment problems, that could help cultivate a culture in which a project that makes something scary simply stalls out rather than continuing

There's probably a bunch more, but these are the kinds of important things to consider (if your probability isn't dominated by how likely you think Eliezer is to update properly, which mine isn't). The landscape would be annoyingly complicated, which makes me kind of annoyed at having to predict a specific configuration of it; a toy sketch of how such a mix might combine into one number is below.
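Purely to illustrate the "some mix of" framing above, here is a toy sketch of a scenario mixture. Every weight and conditional probability is a made-up placeholder, not an estimate from this thread:

```python
# Hypothetical scenarios with (P(scenario), P(Eliezer still >=75% doom in early 2035 | scenario)).
# All numbers are placeholders chosen only to show the arithmetic of the mixture.
scenarios = {
    "ML hits a hard limit":                 (0.15, 0.40),
    "global coordination stalls AI":        (0.10, 0.30),
    "aligned AGI achieved":                 (0.10, 0.05),
    "ML just slower, little alignment":     (0.40, 0.85),
    "fast progress, some alignment gains":  (0.25, 0.70),
}

assert abs(sum(w for w, _ in scenarios.values()) - 1.0) < 1e-9  # weights form a partition

overall = sum(w * p for w, p in scenarios.values())
print(f"P(resolves YES) ~= {overall:.2f}")  # about 0.61 with these made-up numbers
```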

predictedYES

@Aleph I'd like to add large geopolitical conflicts. They could considerably alter the amount of available compute, or change the feasibility of multinational cooperation.

@Aleph Well said.

His attitude toward forecasting makes it unlikely he'd update down so much. To him, foom risk can live on indefinitely, God-of-the-gaps style, wherever there is lingering uncertainty.

Who knows, maybe he'll surprise me, and admit he wildly overupdated on his thought experiments. But that's not the behavior I'm seeing, so, probably not. Probably doesn't help that this is his main source of status now.

predictedYES

@Jotto999 It boggles my mind how the Yudkowsky of last year's 'Death With Dignity' post, who seems enslaved by his own intellectual arrogance and Cassandra complex, could possibly be the same person who once demonstrated the comprehensive understanding of self-aware rationality required to write How to Actually Change Your Mind (and, to a lesser extent, the other 'sequences'). It's the ultimate cautionary tale that knowing the principles means very little if you only practice them selectively.

(To be clear, I'm not saying the chances of AI doom aren't disturbingly high; it's just that they're not 'obviously almost guaranteed' in the way he now seems dogmatically attached to thinking.)

@AngolaMaldives I think another one of the basic tenets of rationality is considering that you might be the one who's wrong, and not assuming that the bias must be in the other person. :)

@AngolaMaldives I think someone in 1900 would reasonably have said that their death is obviously almost guaranteed and I wouldn't expect them to change their mind about that if they lived to 1920.

@AngolaMaldives can't discount that it might be obvious to him and not to us :)

Bet No, because if we're still alive by then, we've probably made a lot of progress on alignment/safety.

@YoavTzfati Or a slow takeoff

@YoavTzfati or technological stagnation
