Will Eliezer believe in mankind's survival again?

Eliezer Yudkowsky and MIRI currently pursue a "death with dignity" strategy. Read this blog post or watch this interview. I assume Eliezer believes it with 99+% confidence, as the blog post says "0% survival".

This resolves YES if, within the next 15 years, Eliezer lowers this confidence to 80% or lower. It also resolves YES if he extends the timeline by at least 10 years, even if the doom confidence remains high.

Resolves NO if Eliezer is dead by 2038, for whatever reason (AI or not).

Also NO if he still forecasts certain doom, as he does today.

Change 2023-03-14: Removed the "3-15 years timeline", since I believe Eliezer probably did not intend his words to be taken as a timeline forecast. In its place, if he makes a statement that doom is "delayed by 10+ years", this resolves as YES. Implicitly, an additional NO resolution is if he is alive but still predicting doom at the time of close.

marktweise

Very similar:

Jon Simon is predicting NO at 58%

He's got too much psychologically invested in the doomer narrative at this point. The AI-pocalypse is going to fail to materialize, and then he's going to do what every (and I mean this with the utmost respect) suicide cult leader does, which is to modify the timeline but continue harping that the end is nigh.

Jon Simon is predicting NO at 58%

@jonsimon Very sorry, I meant "doomsday cult", not "suicide cult". That was mistaken wording on my part; I know there's no advocacy of suicide.

For reference on what has historically happened in these situations, see: https://slate.com/technology/2011/05/apocalypse-2011-what-happens-to-a-doomsday-cult-when-the-world-doesn-t-end.html

David Mathers is predicting YES at 58%

@jonsimon I agree this is likely, but as I read the resolution, if he's still around to extend the timelines in 2038, this would actually resolve "yes" at that point.

Jon Simon is predicting NO at 70%

@DavidMathers ohhh you're right I should have read the description more carefully. Well then, time to flip my bet.

Martin Randall sold Ṁ439 of NO

@copacetic where are you getting 3-15 years at 99.9% from?

In the linked interview he says that 30 years is "unlikely".

Eliezer: Timelines are very hard to project. 30 years does strike me as unlikely at this point. But, you know, timing is famously much harder to forecast than saying that things can be done at all. You know, you got your people saying it will be 50 years out two years before it happens, and you got your people saying it'll be two years out 50 years before it happens.


Max Payne

@MartinRandall Quote from the transcript:

"How on earth would I know? It could be three years. It could be 15 years. We could get that AI winter I was hoping for, and it could be 16 years. I'm not really seeing 50 without some kind of giant civilizational catastrophe."

Martin Randall sold Ṁ29 of NO

@MaxPayne I see that, but "how on earth would I know?" and "it could be 16 years" is not assigning a 99.9% chance of 3-15 years, so I don't know how this works for the market resolution.

Martin Randall sold Ṁ306 of NO

As written I think it resolves YES now because EY has extended the timeline beyond 15 years, or it resolves N/A now because the question is based on a false premise.

marktweise is predicting NO at 75%

@MartinRandall I had not seen the Q&A yet. Thanks for the pointer.

I hate that this question has now turned into an interpretation game. The core question to me is whether Eliezer changes his mind, and that has not happened in the last weeks as far as I can tell, so I'm not resolving it as YES. I have lost confidence in my assumption, though.

Does anybody know a clearer statement about the current belief of @EliezerYudkowsky?

Martin Randall is predicting YES at 75%

@copacetic My read of the texts is that he's very confident of human extinction, but uncertain about timelines, and skeptical of attempts to become certain. The below text is from 2021, so may be outdated, but I think is representative.


marktweise is predicting NO at 75%

I changed the resolution criteria to a vague timeline forecast. This seems to be the only fair change to me. If it triggers too many complaints, I can still use N/A.

Max Payne

Related since it essentially asks where Eliezer is wrong:

Martin Randall is predicting NO at 54%
Arcgis bought Ṁ10 of NO

The "again" in the post title implies something that I don't think is totally true.

Martin Randall bought Ṁ100 of NO

@tessabarton Eg:

alarm bells went off for me in 2015, which is when it became obvious that this is how it was going to go down.


Prior to 2015 I think he reads as more hopeful. In particular, I'm struck by his writing on cryonics as indicative of believing in a chance of a big future for humanity.


If we are all going to be eaten by Shoggoths then not signing up my kids for a slightly increased chance of being eaten by Shoggoths does not make me a lousy parent.

ML is predicting NO at 46%

@tessabarton He has written that as a teenager he was all for creating AI as fast as possible (what might now be called an accelerationist) until he gradually figured out the danger. https://www.lesswrong.com/posts/fLRPeXihRaiRo5dyX/the-magnitude-of-his-own-folly

Max Payne

@EliezerYudkowsky is not at his historical maximum: