Will there be another well-recognized letter/statement on AI risk by July 31, 2023?
Resolved YES on May 30
Resolves YES if a letter similar to the Pause Letter released by the Future of Life Institute is published by the end of July 2023. Resolves NO otherwise.
We'll call it well-recognized if it is signed by at least 10 major public figures in AI, and by at least one Turing Award winner. It may address any sort of risk from powerful AI.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ6,361 |
| 2 | | Ṁ3,974 |
| 3 | | Ṁ1,689 |
| 4 | | Ṁ582 |
| 5 | | Ṁ93 |
Related questions
Will there be a well-recognized letter/statement on rationalussy by June 9, 2024?
46% chance
Will there be significant protests calling for AI rights before 2030?
50% chance
Will there be another blatant demonstration of AI risks, comparable to Bing Chat, by 2024?
37% chance
Will any AI cause an international incident before August 2024? (M1300 subsidy)
22% chance
Will I (co)write an AI safety research paper by the end of 2024?
49% chance
Will AI xrisk seem to be handled seriously by the end of 2026?
21% chance
Will there be a critical vulnerability discovered by AI by the end of 2025?
77% chance
If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023?
35% chance
Will an international AI watchdog body enter into force by May 30, 2027?
72% chance
By the end of 2028, will there be a global AI organization responsible for AI safety and regulation?
38% chance