If AI more intelligent than humans appears to pose risks to humanity's existence, freedoms, and/or happiness, what should be done?
resolved May 8
All governments should ban development.
All governments regulate it extensively, charging the costs to the industry, which will slow down development and hopefully make it safer.
All governments regulate it, with taxpayers paying. This may not slow development as much, but taxpayers have to bear the regulation cost, and it is perhaps more susceptible to becoming light-touch, ineffective regulation.
Let it develop. If it wipes us out, this may be a sensible intelligent choice and we should work towards expanding consciousness in the world even if it is not our own.
Let it develop. Ban or regulation will not work, only drive it to develop in criminal hands which would likely be riskier.
Other - please comment

Ban? Regulate? Nothing would work? What do you think?

Comments welcome.


I accidentally voted for the first option. I should have voted for the last option. I think it should not be regulated at all because the risks posed by regulation far outweigh the risks posed by a lack of regulation.

I'm missing the option: "Humanity will observe passively for now. Not until AI's potentially catastrophic effects are proven by the scientific community will we start trying to take action. However, by then, AI is already too ingrained in the world economy, and through lobbying and populist political discourse, humanity will split into AI deniers and AI alarmists, creating a public tug-of-war that effectively renders humanity unable to deal with AI's negative effects until it has spiralled out of control, and it's too late."

@GazDownright I meant to phrase it as what should we do as opposed to what will actually happen but I may have messed it up a bit.

There is the 'Other - please comment' option, which you have used :)

@ChristopherRandles Other: "Let it develop. This is another tragedy of the commons scenario; we're not wired to prioritize global good over personal/local gain; regulation is futile."

Regulate specific applications, aiming at safety, not at slowing development and proliferation.

Non-proliferation made sense with nukes because there were few or no great benefits to humanity from the nuclear arms race. That is different with AI -- we need massive, incremental proliferation. Safety in a world with AI requires more AI, not less.
