
What do Manifold users consider the single greatest risk associated with development of AI?
26 comments · 725 traders · Ṁ1,111 volume · resolved Jan 1

| Answer | Probability |
|---|---|
| Human extinction | 79% |
| Military applications | 1.9% |
| Surveillance capability | 1.1% |
| Creativity reverts to the mean -- innovation in music, storytelling and design becomes impossible | 1.1% |
| AI works everyone out of a job and UBI can't save us since wealth is no longer being generated | 1.1% |
| Falls into hands of bad actors | 1.1% |
| This is a secret too dangerous for me to share | 1.1% |
| AI will take everyone's mana by being too good at predicting stuff | 1.0% |
| Other | 13% |
The options listed are the ones I came up with off the top of my head. Feel free to add more. Resolves to a single answer. I don't intend to bet in this market.
Updated close time to 12/31/2023. Thank you.
This question is managed and resolved by Manifold.
Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ88 |
| 2 | | Ṁ56 |
| 3 | | Ṁ51 |
| 4 | | Ṁ27 |
| 5 | | Ṁ22 |
People are also trading
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
67% chance
Will AI wipe out Manifold by 2030?
5% chance
At the beginning of 2028, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
68% chance
At the beginning of 2029, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
77% chance
At the beginning of 2030, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
73% chance
At the beginning of 2040, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
69% chance
At the beginning of 2027, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
69% chance
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
Will an AI model outperform 95% of Manifold users on accuracy before 2026?
25% chance
What do you believe is the upper threshold of harm that AI/machine systems could cause this century (w/ 1-99% threat)?
@asmith I agree. I'm not expecting traders to calculate an expected value based on each risk, though; that would be very silly given the broad ranges involved, and what if someone thinks several of the possibilities are existential threats? I'm inclined to think resolving this ambiguity would be a distraction, but I'm open to persuasion.