Pope Francis just came out with a call for a binding global treaty to regulate AI:
Francis called for ethical scrutiny of the "aims and interest of (AI's) owners and developers" warning that some applications of AI "may pose a risk to our survival and endanger our common home," a reference to the earth.
Which other famous people will make similar statements? Feel free to add your own people.
To count, it needs to be a public statement picked up by a credible news source: said in an interview, personal writings, or similar.
I might bet in this market, but will avoid taking large enough positions to impair my judgment.
"Existential risk" broadly defined as on the EA Forum Wiki:
An existential risk is a risk that threatens the destruction of the long-term potential of life. An existential risk could threaten the extinction of humans (and other sentient beings), or it could threaten some other unrecoverable collapse or permanent failure to achieve a potential good state.
Note that this is broader than extinction risk, and could also cover things like totalitarian lock-in. However, smaller negative effects such as discrimination, bias, significant economic damage, election tampering, etc., would not be enough to resolve YES.
Let me know if further clarifications are needed.