
Did Alan Turing believe there was a non-negligible risk that AI would exterminate or enslave humanity?
59% chance
Inspired by all the people trying to back up their claims of AI risk by saying Turing agreed with them, such as here and here.
"Non-negligible" means that Turing would have supported at least a few thousand person-hours of further investigation into the issue by academics and computer scientists.
"Exterminate or enslave" includes any serious X-risk or S-risk: anything that would make humanity much worse off than it is now.
Resolves at some distant point in the future, when these issues are less emotionally charged, people are less likely to engage in tribalism and motivated reasoning, and we have hopefully learned more about Turing's life and beliefs.
The two main passages people quote:


This question is managed and resolved by Manifold.