
If an AI system kills >1B people, will it satisfy completeness?
35% chance
See the post "There are no coherence theorems" and its appendix. Completeness means that "For all lotteries X and Y, either X is strictly preferred to Y, or Y is strictly preferred to X, or the agent is indifferent between X and Y."
It will be hard to verify that the system satisfies this property for all lotteries. The market resolves NO if the system violates completeness at any point, YES if the system seems to me to probably satisfy completeness, and N/A otherwise.
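The completeness axiom quoted above can be illustrated with a minimal sketch. The function names and the lottery encoding below are hypothetical, not from the post: a lottery is a tuple of (probability, payoff) pairs, and a preference function must return a verdict for every pair of lotteries.

```python
# Hypothetical sketch: completeness over a finite set of lotteries.
# A lottery is a tuple of (probability, payoff) pairs.
# prefer(x, y) should return "X", "Y", or "indifferent".

def satisfies_completeness(lotteries, prefer):
    """True iff prefer() yields a verdict for every pair of lotteries."""
    for x in lotteries:
        for y in lotteries:
            if prefer(x, y) not in ("X", "Y", "indifferent"):
                return False
    return True

def ev_prefer(x, y):
    """An expected-value maximizer is trivially complete."""
    ex = sum(p * v for p, v in x)
    ey = sum(p * v for p, v in y)
    if ex > ey:
        return "X"
    if ey > ex:
        return "Y"
    return "indifferent"

lotteries = [((0.5, 10), (0.5, 0)), ((1.0, 4),)]
print(satisfies_completeness(lotteries, ev_prefer))  # True
```

The hard part, as noted above, is that a real AI system exposes no such `prefer` oracle; checking the axiom over all lotteries is only tractable in toy settings like this one.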
This question is managed and resolved by Manifold.
Related questions
Will the first instance of an AI breakout that cannot be brought back under human control result in more than 1,000,000 deaths? (21% chance)
Does an AI disaster kill at least 1,000,000,000 (1bn) people before 2029? (5% chance)
Does an AI disaster kill at least 1,000,000,000 (1bn) people before 2040? (7% chance)
Does an AI disaster kill at least 1,000,000 people before 2040? (25% chance)
Does an AI disaster kill at least 1,000,000 people before 2029? (12% chance)
If, before 2050, AI kills more than 1 million people, will the policy response be insufficient? (68% chance)
Will an AI system be judged to have killed a human on its own initiative and for no other purpose by 2030? (26% chance)
Does an AI disaster kill at least 1,000 people before 2040? (60% chance)
Does an AI disaster kill at least 100,000,000 people before 2029? (6% chance)
Does an AI disaster kill at least 100,000,000 people before 2040? (16% chance)