If AI more intelligent than humans appears to pose risks to humanity's existence, freedoms, and/or happiness, what should be done?
Resolved May 8
All governments should ban development.
All governments regulate it extensively, charging the costs to the industry, which will slow development and hopefully make it safer.
All governments regulate it, with taxpayers paying the cost. This may not slow development as much, but taxpayers bear the regulation cost, and the regulation may be more prone to becoming light-touch and ineffective.
Let it develop. If it wipes us out, this may be a sensible, intelligent outcome, and we should work toward expanding consciousness in the world even if that consciousness is not our own.
Let it develop. A ban or regulation will not work; it would only drive development into criminal hands, which would likely be riskier.
Other - please comment
Ban? Regulate? Nothing would work? What do you think?
Comments welcome.
This question is managed and resolved by Manifold.
Related questions
Public opinion, late 2025: Is out-of-control AI becoming a real threat to humanity?
When (if ever) will AI cause human extinction?
Will humanity wipe out AI? (10% chance)
Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030? (33% chance)
Will AI decide to uncouple its destiny from humanity's?
Are AI and its effects the most important existential risk, given only public information available in 2021? (89% chance)
Will humanity wipe out AI x-risk before 2030? (10% chance)
If AI wipes out humanity by 2030, will it regret its decision? (30% chance)
Will AI cause human extinction before 2100 (and how)?
Will AI wipe out humanity before the year 2030? (3% chance)