Will an unaligned AI or an aligned AI controlled by a malicious actor create a "wake-up call" for humanity on AI safety?
2050
68% chance
A "wake-up call" is an incident which makes the surviving humans on Earth realise that AI alignment is actually something that humanity should care about.
If no humans survive, it is not a wake-up call, it is the existential AI risk that we have been warning about.
In other words: will an AI or group of AIs kill some number of people (fewer than everyone on Earth), or cause some other kind of disaster, that makes people say, "Holy shit! The AI risk people were right all along!"?
Related questions
Contingent on AI being perceived as a threat, will humans deliberately cause an AI winter before 2030?
44% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
37% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
35% chance
Is AI Safety a grift?
33% chance
Will humans deliberately cause an AI winter?
24% chance
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?
65% chance
Will misaligned AI kill >50% of humanity before 2050?
18% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance
IF an existential crisis is caused as a result of AI misalignment, THEN will it be from an AI uprising? (Yes, really)
50% chance
Will AI decide to uncouple its destiny from humanity's?