Will Eliezer Yudkowsky publicly claim to have a P(doom) of less than 50% at any point before 2040?
Closes 2040 · 27% chance

Phrasing here matters. I will count "P(doom)" as well as similar-enough phrasings like "probability of doom", "P doom", and "chance we all die", whether in text or in spoken recorded interviews, podcast appearances, etc. I will exercise personal judgement for borderline cases.

P(doom|something else) is not P(doom).
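
To spell out why (my own illustration, using a generic condition X rather than anything from the market description): by the law of total probability,

$$P(\mathrm{doom}) = P(\mathrm{doom}\mid X)\,P(X) + P(\mathrm{doom}\mid \lnot X)\,P(\lnot X),$$

so a quoted conditional P(doom | X) pins down the unconditional P(doom) only once you also know P(X).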

He must explicitly state a number that is less than 50%, or explicitly state that his number is less than 50%, for this market to resolve YES.

Same as this market, but with a 50% threshold and a 2040 deadline.


Can somebody explain the timespan of the doom?

P(doom) is meaningless if it has no end date.

P(doom by 2100) is meaningful.

P(doom by 2050) is meaningful and would have a different value.

If you refuse to put a time limit on doom, even P(AI doom | doom) would make more sense than an unbounded P(doom).

If there is no end date, then even if I am 99% confident in AI safety, I would say that eventually humanity loses some intergalactic war, or faces some consciousness-parasite illness, or ... or ... (the list is infinite).

When you do not set any end date, you get the Simpsons/Nostradamus effect: any vague prediction will eventually come true, given enough time.

(Murphy's law in engineering: anything that can break will eventually break if you wait long enough. Treat humanity as a thing for which we can imagine insanely many ways to die.)

I could say that I treat the heat death of the universe as doom, so my P(doom) would be at least as high as my confidence in heat death.
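
To make the "enough time" point concrete (an illustrative model, not something the market specifies): suppose each century carries some fixed extinction hazard p > 0. Then

$$P(\text{doom eventually}) = 1 - \lim_{n\to\infty}(1-p)^n = 1 \quad \text{for any } p > 0,$$

so a time-unlimited P(doom) is driven toward 1 by accumulation alone, no matter how safe each individual century is.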

Another problem.

If I say something like "will Trump kill Biden with his own hands?" (without specifying an end date, or that the market resolves NO when Trump dies or Biden dies of other causes), then we could wait a couple of thousand years until two new random guys with the same names have a conflict.

Those are two reasons why a rational person would not want to say anything about time-unlimited doom.

The question is ill-defined.

Let's imagine somebody says P(doom) = 90%. That would mean there is a 10% chance doom NEVER happens, i.e., a 10% chance that humanity becomes a transcendental, indestructible, fundamentally immortal system, able to survive any threat (gamma-ray bursts, black holes, heat death, and so on).
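
Put another way (again an illustrative model): if p_t is the extinction hazard in period t, then

$$P(\text{doom never happens}) = \prod_{t=1}^{\infty} (1 - p_t),$$

which is positive only if the hazards p_t shrink fast enough that their sum converges, i.e., only if humanity eventually becomes essentially invulnerable.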

bought Ṁ100 NO

Can't claim things if you're extinct.

Can't enjoy your mana if you're extinct, either

Note: in the worlds where we survive, he will have been forced to update by 2040, so his number will necessarily be less than the current >97%.

Not necessarily. His claim was never particularly a claim about timelines; yes, at present it looks like timelines are fairly short, but the fundamental argument is about the nature of what a mind is. Human ways of thinking, human values, human "common sense" are not things that come automatically from intelligence in the optimization-power sense. Not only that, but even specifying these things in a robust way is not remotely something we know how to do.

If the decades pass with another AI winter, and AI stagnates at roughly GPT-4 or GPT-5 levels for some 25 years, then that reduces P(doom) to the extent that it makes one think that maybe superintelligent AI will never come. But there are good a priori reasons to think that human intelligence is not some sort of fundamental limit (let alone the subhuman GPT-4 or GPT-5 level).

I predict that if this comes to pass, Yudkowsky will hold to roughly the argument above. It is entirely in accordance with everything he holds so far. A surprise in the slowness of development of AI should not shift you to thinking that when ASI comes it will be safe; nor does it imply that ASI will never come. Put together, this doesn't imply a reduction in long-term P(doom) as to the ultimate fate of humanity. (It does reduce P(doom this century), but that's not the question here.)
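
One way to formalize this reading (my own sketch of the decomposition, not Yudkowsky's stated math, and ignoring non-AI sources of doom for simplicity):

$$P(\text{doom}) \approx P(\text{ASI is ever built}) \times P(\text{doom}\mid \text{ASI is built}).$$

A 25-year stagnation lowers P(ASI before, say, 2050), and perhaps nudges down P(ASI is ever built), but on the argument above it says little about P(doom | ASI is built), which is where the bulk of the >97% comes from.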
