[Independent MC Version] If Artificial General Intelligence has an okay outcome, what will be the reasons?
45%: Humanity coordinates to prevent the creation of potentially-unsafe AIs.

26%: AIs never develop coherent goals.

15%: Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take it slow trying to align its future self, preventing runaway self-improvement.

22%: An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development.

16%: Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart them robustly. Despite caring about other things, it is relatively cheap for an AGI to satisfy human values.

16%: Alignment is impossible. Sufficiently smart AIs know this and thus won't improve themselves or create successor AIs, but will instead try to prevent the existence of smarter AIs, just as smart humans do.

37%: Hacks like RLHF-ing self-disempowerment into frontier models work long enough to develop better alignment methods, which in turn work long enough to ... etc.; we keep ahead of 'alignment escape velocity'.

37%: Aligned AI is more economically valuable than unaligned AI. The size of this gap and the robustness of the alignment techniques required to achieve it scale up with intelligence, so economics naturally encourages solving alignment.

34%: A smaller AI disaster causes widespread public panic about AI, making it a bad legal or PR move to invest in powerful AIs without also making nearly-crippling safety guarantees.

31%: High-level self-improvement (rewriting code) is an intrinsically risky process, so AIs will prefer low-level, slow self-improvement (learning); thus AIs that collaborate with humans will have an advantage. This ends with a posthuman ecosystem.

This is an independent (aka unlinked) version of: https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1

I will add new options based on the following polls:

https://manifold.markets/4fa/which-answers-should-be-kept-when-m
https://manifold.markets/4fa/google-form-in-description-which-an

Original market's description:

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.

Comments:

Nice smooth colour spectrum on the answers 👌

Important market!