Will Yudkowsky claim that he is more than 50% sure that AI will kill everyone no later than 1 year after the claim?
30% chance (market closes 2036)

Criteria for Resolution:

  1. Claim of Probability: Eliezer Yudkowsky must make a public claim that he is more than 50% sure that AI will kill everyone no later than 1 year after the claim is made. The claim must be made in all seriousness, not as a joke or sarcasm.

  2. Types of Qualifying Statements: Beyond explicit quantitative claims, statements such as "certain", "pretty sure", "almost sure", "rather yes than no", or any similar expression indicating a probability higher than 50% will qualify.

  3. If the resolution criteria have not been met, the market will resolve NO on January 1, 2036.

Resolving Ambiguity:

- If a claim is ambiguous, an effort will be made to ask Yudkowsky directly whether his statement means the probability is higher than 50%.

A related market (the same question, but with a 90% threshold instead of 50%): https://manifold.markets/IhorKendiukhov/will-yudkowsky-claim-that-he-is-mor-f2h2nq5epx

Comments:
FergusArgyll bought Ṁ50 NO

No, that would be falsifiable and we can't have that...

@FergusArgyll All of his major predictions are falsifiable, if sometimes not positively verifiable. "When superintelligent AI is created, it will kill everyone" can be proven wrong if superintelligent AI doesn't immediately wipe out humanity upon being created, although we'll only be there to see it if it is wrong.

(If you're arguing that superintelligence isn't going to happen at all, you're having a completely different conversation, where Yudkowsky isn't particularly relevant.)

