Suppose you can wave a magic wand and end all AI (LLMs, image models, etc). Nobody will know it was you, it magically won't disrupt any important projects, and no similar AI will ever come into existence (people might eventually create other kinds of AIs, like humanoid robots or something). Do you do it?
This is a mirror of an ACX Survey question, because I'm curious what Manifold will say.
I also have a market on what Manifold will want to ban most:
@Joshua Here's a question that may affect some people's choice: Do we have to wave the wand now, or can we wait until later? I imagine that if the option is available, most people would at least wait until later to wave the wand because AI is currently nowhere near as powerful as it would need to be to pose an x-risk. If we have to wave the wand now, then some people will still wave it because it's too risky to leave open the possibility that AI evolves into something that powerful, but some might decide it's not worth waving.
@Joshua I think if you're a P(doom)er, this type of ban doesn't really help, because it leaves open the possible development of "other kinds of AIs, like humanoid robots or something." I actually don't really understand the argument for banning LLMs in that case.
@HarrisonNathan it's about timeline duration.
Dangerous AI would likely take longer to create if all neural network research vanished from our knowledge. That gives us more time to research alignment strategies.