In every field of relevance, being early is the same as being wrong.
Will doomers look like fools in 2030?
An honest poll will be held on whether “AI” has demonstrated any semblance of existential risk.
(Or it turns out that domesticated models are as dangerous as domesticated cats. Same as with the GPT-2, GPT-3, and now GPT-4 releases, where nothing major happened and all the worry cases failed to occur.)
Related questions
Is AI Safety a grift? (33% chance)
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (35% chance)
Will the 2024 AI Safety summit in France weaken AI Safety commitments? (61% chance)
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028? (65% chance)
Will AI be useful, but not really change the world... by 2030?! (32% chance)
According to 20 AI safety experts, what is the most promising research direction in AI safety today?
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030? (64% chance)
Which eventuality regarding AI is more dangerous? (poll)
By end of 2028, will there be a global AI organization responsible for AI safety and regulations? (38% chance)
Would it be safer for humanity if all AI was open source? (poll)