The superintelligence will brainwash society into declaring alignment researchers (with high p(doom)s) insane.
45% chance
Will resolve 'yes' if @EliezerYudkowsky is institutionalized and blames it on AI. Bonus points (*no actual value*) if he somehow also relates it to the crucifixion.
Will resolve 'no' when AI "goes well".
Update 2025-06-25 (PST) (AI summary of creator comment): If AI demonstrably does not go well (i.e. humanity has clearly lost control) and alignment researchers are neutralized by means other than institutionalization or public excommunication, the market will resolve to No.
This question is managed and resolved by Manifold.