
If AGI has an okay outcome, will there be an AGI singleton?
25% chance
An okay outcome is defined in Eliezer Yudkowsky's market as:
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This resolves YES if I can easily point to the single AGI responsible for the okay outcome, and NO otherwise.
This question is managed and resolved by Manifold.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will we get AGI before 2029? (26% chance)
Will we get AGI before 2030? (31% chance)
When artificial general intelligence (AGI) exists, what will be true?
What will be true about the person who creates AGI?
Will AGI be a problem before non-G AI? (20% chance)
By when will we have AGI?
Will AI create the first AGI? (41% chance)
Will we get AGI before 2048? (81% chance)
Will we get AGI before 2046? (79% chance)