How much AI extinction risk would you accept?
16% chance

Almost all of the questions on Manifold related to AI risk ask how, why, or when AI will cause humans to go extinct.

Few questions acknowledge that people already accept some level of extinction risk (for example, by not abolishing nuclear weapons), and will do the same for AI. Some studies have tried to derive formulas for the "ideal" level of risk by weighing it against the value of the lives that could be saved if AI eliminated diseases such as cancer or aging itself. At present, society appears to believe that the potential benefits of AI far exceed the extinction risk, which is why calls to "stop AI" have fallen flat.
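
As a rough illustration of the kind of calculation such studies perform, here is a minimal expected-value sketch. Every number and variable name in it is a hypothetical assumption chosen for readability, not a figure from any actual study.

```python
# Hypothetical expected-value sketch; every number below is an illustrative
# assumption, not a figure from any study.

population = 8e9                 # people alive today (rough)
value_per_life_year = 1.0        # normalize one ordinary life-year of welfare to 1
years_remaining = 40             # assumed average remaining life-years per person
prosperity_multiplier = 3.0      # assumed welfare gain per life-year if AI succeeds
p_extinction = 0.16              # accepted probability of extinction within 30 years

# If AI is pursued: with probability (1 - p) everyone enjoys boosted welfare,
# with probability p everyone loses their remaining years.
ev_with_ai = ((1 - p_extinction) * population * years_remaining
              * value_per_life_year * prosperity_multiplier)

# If AI is halted: everyone keeps baseline welfare with certainty.
ev_without_ai = population * years_remaining * value_per_life_year

print(f"Expected value with AI:    {ev_with_ai:.3e}")
print(f"Expected value without AI: {ev_without_ai:.3e}")
```

Under these made-up numbers the gamble looks favorable, but the conclusion flips quickly if the prosperity multiplier is small or if future generations are counted, which is exactly the disagreement this question is probing.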

This question asks: if you were in charge of the world, what level of risk of humanity going extinct within the next 30 years would you accept in order to maximize the probability that AI enables unimaginable prosperity?

The question does not mean "30 years from when the question was created," but 30 years from whenever you read it. Thirty years was chosen because, at current technological levels, it is roughly the remaining life expectancy of a random adult who can post on the Internet, and who would therefore live to experience both the benefits and the risks. Because the window always extends 30 years into the future, the question will never resolve.

Since markets reward guessing how others will behave, the price should settle on the level of risk society is ultimately likely to accept, and the way to profit is to trade toward that answer.
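
For a hedged illustration of that trading logic, the sketch below shows the profit from buying YES at one price and selling after the market moves toward what society ultimately accepts. The prices and stake are made up, and Manifold's actual AMM and fee mechanics are not modeled.

```python
# Illustrative profit arithmetic for a market that never resolves; the prices
# and stake are made up, and Manifold's actual AMM/fee mechanics are ignored.

buy_price = 0.16     # implied probability when you buy YES
sell_price = 0.40    # price after the market drifts toward society's answer
stake = 200          # mana spent

shares = stake / buy_price       # YES shares acquired (assuming a flat price)
proceeds = shares * sell_price   # value when sold at the later price
profit = proceeds - stake

print(f"shares={shares:.0f}, proceeds={proceeds:.0f}, profit={profit:.0f} mana")
# -> about 1250 shares, 500 mana back, 300 mana profit under these assumptions
```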


I personally believe that the market will settle much lower, but my personal answer would be a 40% risk of death, which is higher than all the Manifold markets are currently predicting for this question.
