
Will I come to believe that Eliezer Yudkowsky is substantially wrong about AGI being an X-risk?
Ṁ430 · Resolved NO · May 6
"Substantially wrong" does not include merely being wrong about timing. If I come to believe that, yes, AGI is still an X-risk, but we will have more time before we might all die than Eliezer thinks, this does not count as a substantive disagreement for the purposes of this market.
This is in the "Change My Mind" group, so feel free to debate me in the comments.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ46 |
| 2 | | Ṁ8 |
| 3 | | Ṁ8 |
| 4 | | Ṁ1 |
| 5 | | Ṁ0 |
Related questions
- Will Eliezer Yudkowsky win his $150,000 - $1,000 bet about UFOs not having a worldview-shattering origin? (92% chance)
- Will Eliezer Yudkowsky use AGI to figure out why he was wrong? (29% chance)
- Will I gain any significant insight from reading Eliezer Yudkowsky's new book? (36% chance)
- Will Eliezer Yudkowsky get fully blaked before 2027? (19% chance)
- Will I significantly deconvert Eliezer Yudkowsky from Bayesianism by the end of 2025? (2% chance)
- At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability? (55% chance)
- Will Eliezer Yudkowsky get killed by an AI? (10% chance)
- Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030? (9% chance)
- Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
- Conditional on AGI being available by 2045, will Eliezer Yudkowsky be declared enemy #1 by the AI during the first 60 days? (7% chance)