If Artificial General Intelligence has an okay outcome, what will be the reason?
67% - Moral Realism is true, the AI discovers this and the One True Morality is human-compatible.
7% - Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out.
5% - Eliezer finally listens to Krantz.
4% - Other
3% - Humans become transhuman through other means before AGI happens.
3% - Humanity coordinates to prevent the creation of potentially-unsafe AIs.
3% - Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values.
1% - AGI is never built (indefinite global moratorium).

Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence, with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could have been attained by a positive Singularity (à la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
