Will any prominent e/accs switch sides to Notkilleveryoneism after examining the arguments in detail?
2028 · 74% chance

e/acc (effective accelerationism) - the belief that AI alignment is not something we need to worry about, as our benevolent and highly competent corporate/government overlords will do exactly as much of it as necessary, when necessary

Notkilleveryoneism - the belief that sufficiently-advanced AI could kill us all, and therefore we need to devote a large amount of resources towards AI alignment research. May also include the belief that there should be a moratorium on cutting-edge AI gain-of-function research ("capabilities research" for short). Leading proponent of this view: Eliezer Yudkowsky.

Side note: These are not actually the only two positions on existential risk to humanity from AI. There is also negative utilitarianism, which holds that if unfeeling AIs replaced humanity, other things being equal, this would be an improvement in the very long term because it would mean less suffering; and AI supremacism, which holds that it is good for humanity to be obsoleted and entirely replaced by "superior" life forms such as AIs. So it's possible that an e/acc could switch to one of these two fringe positions instead of switching to Notkilleveryoneism - but the fringe positions are rare at the moment and don't seem particularly compatible with e/acc.
