Will there be another well-recognized letter/statement on AI Risk by Aug 31, 2023?
Resolved NO (Sep 7)

Resolves YES if a letter similar to the Statement on AI Risk from the Center for AI Safety, or to the Pause Letter released by the Future of Life Institute, is published by the end of August 2023. Resolves NO otherwise.

We'll call it well-recognized if it is signed by at least 10 prominent public figures in AI, and at least one Turing Award winner. It may address any sort of risk from powerful AI.


🏅 Top traders

1. Ṁ520
2. Ṁ499
3. Ṁ100
4. Ṁ66
5. Ṁ59

New letter just dropped https://futureofliff.org/

predicted YES

@mkualquiera sorry, but there's no Nobel prize winner

bought Ṁ50 of YES

@palcu do we have a Turing Award winner?

predicted YES

Yes @Mira, you win... there's no Turing Award Winner on that list.

Even though I think LeCun could have written that letter.

predicted YES

@Mira I'm a bit salty about this, though. Obvs the letter is in the spirit of the market. And LeCun saying that people 'have endorsed our open source approach' clearly suggests that he is responsible for it.

He just... forgot to sign it at the bottom.

predicted NO

Hmm, the statement doesn't really mention AI risk - at most it mentions it obliquely.

“We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.”

predicted NO

Let's make a market and all buy YES so Mira spends thousands of Mana aping our position after seeing our "insider trading"

Sorry guys, nice try.

If your mom signs such a letter this time too, can you please let us know?

predicted YES

@firstuserhere for being the first one to ask yes

Would a letter saying that everything is going to be okay and to please remain calm be sufficiently similar to resolve this YES?

predicted YES

@MartinRandall As long as it's directly referring to AI risk in some way, saying there's no risk is fine too.

predicted YES

@dmayhem93 I recommend rewording the market description with clarifications to address the various things that people complained about on the last one

  • Whether the signatures have to be in place by the deadline

  • Reword to make it clear that the new letter doesn't have to be published by the Future of Life Institute

bought Ṁ25 of YES
bought Ṁ10 of YES

@mkualquiera I'm just a humble letter salesman