
Will I come to believe that Eliezer Yudkowsky is substantially wrong about AGI being an X-risk?
Ṁ190 · Ṁ430 · resolved May 6
Resolved NO
"Substantially wrong" does not include merely being wrong about timing. If I come to believe that, yes, AGI is still an X-risk, but we will have more time before we might all die than Eliezer thinks, this does not count as a substantive disagreement for the purposes of this market.
This is in the "Change My Mind" group - so feel free to debate me in the comments.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ46 |
| 2 | | Ṁ8 |
| 3 | | Ṁ8 |
| 4 | | Ṁ1 |
| 5 | | Ṁ0 |
@harfe A P(doom) of 10% isn't significantly different from a P(doom) of 90%, to me. Both are unacceptably high.
Related questions
Will Eliezer Yudkowsky win his $150,000 - $1,000 bet about UFOs not having a worldview-shattering origin?
95% chance
Will Eliezer Yudkowsky use AGI to figure out why he was wrong?
36% chance
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?
59% chance
Will Eliezer Yudkowsky get killed by an AI?
11% chance
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
10% chance
Conditional on AGI being available by 2045, will Eliezer Yudkowsky be declared enemy #1 by the AI during the first 60 days?
7% chance
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
18% chance
If most of the economy becomes automated without us all dying, will Eliezer Yudkowsky admit that he was wrong?
46% chance
Will anyone make a YouTube video seriously claiming to "debunk" Eliezer Yudkowsky on AI risk?
85% chance
Will Eliezer Yudkowsky work for any major AI-related entity by 2027?
20% chance