
Conditional on AI alignment being solved, will governments or other entities be capable of enforcing use of aligned AIs?
37% chance
If we solve alignment, we still have to make sure that people create aligned AIs and don't create unaligned AIs - or at least, don't create unaligned AIs beyond a certain level of "power". This seems hard!
This question is managed and resolved by Manifold.
Related questions
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Will we solve AI alignment by 2026?
1% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will an unaligned AI or an aligned AI controlled by a malicious actor create a "wake-up call" for humanity on AI safety?
69% chance
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals?
17% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Is AI alignment computable?
53% chance
Will Meta AI start an AGI alignment team before 2026?
45% chance
Will the US government require AI labs to run safety/alignment evals by 2025?
20% chance
Will an AI built to solve alignment wipe out humanity by 2100?
12% chance