Will a misaligned AI kill 1% of the world population within any 12 month period before 2035?
2035 · 14% chance

Resolves YES if a misaligned AI is the primary cause of death of at least 1% of the world population (~81-85M deaths based on current projections) within any 12-month period that ends before 2035.
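For reference, the ~81-85M figure follows from projected world populations of roughly 8.1-8.5 billion over the question window; the population numbers below are assumptions for illustration, not part of the market text. A minimal sketch of the arithmetic:

```python
# Rough threshold check: 1% of an assumed projected world population.
# Population figures (~8.1B now, ~8.5B by 2035) are assumptions, not from the market.
for population in (8.1e9, 8.5e9):
    threshold = 0.01 * population
    print(f"1% of {population / 1e9:.1f}B people = {threshold / 1e6:.0f}M deaths")
```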


How would it resolve if a terrorist group programmed drones to seek out and kill 100m people (based on some demographic)?

How would it resolve if an AI that was meant to manage a power grid failed and 100m people froze to death?

Like, is the question about the AI intending death in some sense, or would a failure causing death, or an AI programmed to kill, count?

@Mad What I have in mind is a misaligned AI, rather than humans misusing AI or a failure due to a software bug or misclassification in a critical AI system. So both of the scenarios you mentioned would resolve NO.

However, if something similar to the second scenario occurred because the AI's internal reward function compelled it to do so (for example, it was trying to minimize energy usage), then it would resolve YES.

@Mad And thanks for asking. I will add the clarification to the question.
