Will any prominent e/accs switch sides to Notkilleveryoneism after examining the arguments in detail?
2028 · 75% chance

e/acc (effective accelerationism) - the belief that AI alignment is not something we need to worry about, as our benevolent and highly competent corporate/government overlords will do exactly as much of it as necessary, when necessary

Notkilleveryoneism - the belief that sufficiently-advanced AI could kill us all, and therefore we need to devote a large amount of resources towards AI alignment research. May also include the belief that there should be a moratorium on cutting-edge AI gain-of-function research ("capabilities research" for short). Leading proponent of this view: Eliezer Yudkowsky

Side note: These are not actually the only two positions on existential risk to humanity from AI. There is also negative utilitarianism, which holds that if unfeeling AIs replaced humanity, other things being equal, this would be an improvement in the very long term because it would mean less suffering; and AI supremacism, which holds that it is good for humanity to be obsoleted and entirely replaced by "superior" life forms such as AIs. So it's possible that an e/acc could switch to one of these two fringe positions instead of switching to Notkilleveryoneism - but the fringe positions are rare at the moment, and don't seem particularly compatible with e/acc.


Who are "prominent e/accs"?


@xlr8harder has deleted their Twitter account - can I resolve this market now?


AI alignment is a scientific field now, like climate science, with published papers and full-time professional experts and everything.

And like climate science, there is:

  • a subset of the field that bears on the question "Is this a real problem we should be worried about?" - by which I mean, should humanity in general be worried about it, not just alignment researchers in AI companies?

  • a subset of the field that bears on the question "How bad is the problem?" - though the boundary between this and the last question is kind of fuzzy because it kinds of depends on how easily convinced a person is!

  • and, perhaps less relevantly for this market, a subset of the field that bears on the question "What can we do about the problem, and what kind of impact are we likely to have by doing those things?" - although judging what impact the alignment techniques being thought up right now will have is hard, because the field is at such an early stage.

In 2012, Professor Richard Muller, who had been paid by the fossil-fuel magnates the Koch brothers to study climate science and critique it, famously announced that, by undertaking this task, he had converted from a climate skeptic into someone who accepted mainstream climate science as basically correct.

My hope is that one or more e/accs, by similarly studying AI alignment in depth (although I doubt they'll be able to get alignment-ignorer Satya Nadella to fund them to do so!), will come to an analogous conclusion. There is a large body of published work on this subject; for a layperson, I'd recommend the following (unfortunately, so far I've only read parts of the written ones):

  • "The AI Does Not Hate You", a book by journalist Tom Chivers, which is an accessible introduction to the topic

  • "Superintelligence", a book by Nick Bostrom. This one is more of an academic text by a full-time scholar, but still written for a popular audience. Somewhat old now but even so, offers a range of scary arguments about why we should care about AI alignment. If you don't agree with one argument, that's no problem - he has lots, lots more - and not all of them depend on the others.

  • The YouTube channel of Rob Miles, a crowdfunded educator who specialises in AI alignment and other rationalist topics. He's not a professional academic, but he has enough expertise in AI and computer science to have been repeatedly invited onto the University of Nottingham's educational Computerphile YouTube channel to explain AI and computer science topics. His main channel explains AI alignment arguments and theories for a popular audience. I've watched all of those videos and I can highly recommend them - they're usually very easy for me to follow.

  • And if you're still not convinced by Chivers, or even by Bostrom and Miles, or you want something more recent than Bostrom's book, try Eliezer Yudkowsky's relatively recent essay "AGI Ruin: A List of Lethalities". It has certain prerequisites, however, so it's probably not the right thing to read first unless you're already quite familiar with some of the theoretical terms he uses.


Opposite market: