
Will there be another well-recognized letter/statement on AI risk by July 31, 2023?
Resolved YES on May 30
Resolves YES if a letter similar to the Pause Letter released by the Future of Life Institute is published by the end of July 2023. Resolves NO otherwise.
We'll call a letter well-recognized if it is signed by at least 10 major public figures in AI and at least one Turing Award winner. It may address any sort of risk from powerful AI.
Top traders

# | Name | Total profit |
---|---|---|
1 | | Ṁ6,361 |
2 | | Ṁ3,974 |
3 | | Ṁ1,689 |
4 | | Ṁ582 |
5 | | Ṁ93 |


Martin Randall bought Ṁ10 of NO
I think the FLI letter will be the coordination point until at least August. It's not like anyone has a better idea than a training pause.
1 reply

Martin Randall predicted NO
@MartinRandall What if there's a letter with no ideas, just a cry for help?

Related questions
Will anyone very famous claim to have made an important life decision because an AI suggested it by the end of 2023?
Will >$100M be invested in dedicated AI Alignment organizations in the next year as more people become aware of the risk we are facing by letting AI capabilities run ahead of safety?
Will A.I. Be Able to Make Significantly Better, "Common Sense Judgements About What Happens Next," by End of 2023?
By 2028, will there be a visible break in trend line on US GDP, GDP per capita, unemployment, or productivity, which most economists attribute directly to the effects of AI?
Will Gallup's poll on America's most important problems have at least 1% of respondents identify AI by the end of 2023?
By end of 2028, will AI be considered a bigger x risk than climate change by the general US population?