Will there be another well-recognized letter/statement on AI risk by July 31, 2023?
Resolved YES on May 30

Resolves YES if a letter similar to the Pause Letter from the Future of Life Institute is released by the end of July 2023. Resolves NO otherwise.

We'll call it well-recognized if it is signed by at least 10 prominent public figures in AI and at least one Turing Award winner. It may address any sort of risk from powerful AI.


🏅 Top traders

#  Name  Total profit
1        Ṁ6,361
2        Ṁ3,974
3        Ṁ1,689
4        Ṁ582
5        Ṁ93
bought Ṁ200 of YES

Does calling for a "warning system" or similar count?

predicted YES

@dp Or calling for setting up some evaluations, or similar?

bought Ṁ10 of NO

I think the FLI letter will be the coordination point until at least August. It's not like anyone has a better idea than a training pause.

predicted NO

@MartinRandall What if there's a letter with no ideas, just a cry for help?

https://www.safe.ai/statement-on-ai-risk