
Will there be another well-recognized letter/statement on AI Risk by Aug 31, 2023?
1.1k traders · Ṁ14k volume · Resolved NO on Sep 7
Resolves YES if a letter similar to the Statement on AI Risk from the Center for AI Safety or the Pause Letter from the Future of Life Institute is released by the end of August 2023. Resolves NO otherwise.
We'll call it well-recognized if it is signed by at least 10 prominent public figures in AI and at least one Turing Award winner. It may address any sort of risk from powerful AI.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ520
2 | | Ṁ499
3 | | Ṁ100
4 | | Ṁ66
5 | | Ṁ59
Related questions
Will >90% of Elon re/tweets/replies on 19 December 2025 be about AI risk?
6% chance
Will AI xrisk seem to be handled seriously by the end of 2026?
24% chance
Will Pope Leo XIV cite in his text the signatories of the CAIS Statement on AI Risk before 2028?
In 2030, will we think FLI's 6 month pause open letter helped or harmed our AI x-risk chances?
Will the US AI Safety Institute be rebranded as the AI Security Institute by the end of 2025?
6% chance
Will AI existential risk be mentioned in the white house briefing room again by May 2029?
87% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
42% chance
Will we have a sufficient level of international coordination to ensure that AI is no longer threat before 2030?
22% chance
Will there be a more significant protest calling for a pause in AI than the pause letter by May 2029 (or an actual pause)?
86% chance