
Will there be another well-recognized letter/statement on AI risk by May 31, 2023?
2.3k traders · Ṁ1.3m volume · resolved May 30
Resolved YES
Resolves YES if a letter similar to the Pause Letter released by the Future of Life Institute is published by the end of May 2023. Resolves NO otherwise.
We'll call it well-recognized if it is signed by at least 10 prominent public figures in AI, including at least one Turing Award winner. It may address any kind of risk from powerful AI.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ237,288
2 | | Ṁ189,481
3 | | Ṁ28,073
4 | | Ṁ16,980
5 | | Ṁ11,534
Related questions
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
6% chance
Will AI existential risk be mentioned in the white house briefing room again by May 2029?
87% chance
Will Pope Leo XIV cite in his text the signatories of the CAIS Statement on AI Risk before 2028?
In 2030, will we think FLI's 6 month pause open letter helped or harmed our AI x-risk chances?
Will AI xrisk seem to be handled seriously by the end of 2026?
24% chance
Will there be a more significant protest calling for a pause in AI than the pause letter by May 2029 (or an actual pause)?
86% chance
Will there be a coherent AI safety movement with leaders and an agenda in May 2029?
79% chance
Will we have a sufficient level of international coordination to ensure that AI is no longer a threat before 2030?
22% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
42% chance
Will an international AI watchdog body enter into force by May 30, 2027?
66% chance