Ban? Regulate? Nothing would work? What do you think?
Comments welcome.
I'm missing the option: "Humanity will observe passively for now. Not until AI's potentially catastrophic effects are proven by the scientific community will we start trying to take action. However, by then, AI is already too ingrained in the world economy, and through lobbying and populist political discourse, humanity will split into AI deniers and AI alarmists, creating a public tug-of-war that effectively renders humanity unable to deal with AI's negative effects until it has spiralled out of control and it's too late."
@GazDownright I meant to phrase it as "what should we do" as opposed to "what will actually happen", but I may have messed it up a bit.
There is the 'other, please comment' option, which you have used :)
@ChristopherRandles Other: "Let it develop. This is another tragedy of the commons scenario; we're not wired to prioritize global good over personal/local gain; regulation is futile."
Regulate specific applications, aiming at safety, not at slowing development and proliferation.
Non-proliferation made sense with nukes because there were few or no great benefits to humanity from the nuclear arms race. That is different with AI -- we need massive incremental proliferation. Safety in a world with AI requires more AI, not less.