At Manifest 2024 will @jim convince Eliezer Yudkowsky that an AI moratorium would be a bad idea?
2k traders · Ṁ46k volume · resolved Jun 10
Resolved NO
The resolution criterion is slightly loose; the goal is just to convince Yudkowsky that the risk of human extinction is justified by the chance of the creation of a cool AI.
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|---|
| 1 | | Ṁ1,245 |
| 2 | | Ṁ890 |
| 3 | | Ṁ438 |
| 4 | | Ṁ226 |
| 5 | | Ṁ190 |
Related questions
- It's the end of 2025, a global AI moratorium is in effect, and Eliezer Yudkowsky endorses it. What were its decisive causes?
- Will @EliezerYudkowsky reverse his opinion on AI safety before 2030? (7% chance)
- If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)? (12% chance)
- Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk before 2026?
- Will Eliezer Yudkowsky work for any major AI-related entity by 2027? (20% chance)
- At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability? (54% chance)
- Will Eliezer Yudkowsky get killed by an AI? (10% chance)
- Conditional on AGI being available by 2045, will Eliezer Yudkowsky be declared enemy #1 by the AI during the first 60 days? (7% chance)
- Will Eliezer Yudkowsky become romantically involved with an AI before 2030? (15% chance)
- By 2028, will Eliezer Yudkowsky announce he's in love with an AI girlfriend? (3% chance)