
To what extent do you think people's concerns about future risks from AI are due to misunderstandings of AI research?
Never closes
Almost entirely
To a large extent
Somewhat
Not much
Hardly at all
This question is managed and resolved by Manifold.
Related questions
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030? (30% chance)
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified? (75% chance)
Public opinion, late 2025: Out-of-control AI becoming a threat to humanity, a real threat?
In January 2026, how publicly salient will AI deepfakes/media be, vs AI labor impact, vs AI catastrophic risks?
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk? (6% chance)
OpenAI CEO doesn't think existential risk from AI is a serious concern in Jan 2026 (27% chance)
Are AI and its effects the most important existential risk, given only public information available in 2021? (89% chance)
Will something AI-related be an actual infohazard? (76% chance)
If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023? (38% chance)
Will humanity wipe out AI x-risk before 2030? (10% chance)