Will S-risk prevention get $10m+ from EA funders before 2030?
67% chance · 5 traders · Ṁ91 volume · closes 2030

To resolve this question, in 2030 I will review publicly available grantmaking documentation (for example, the Open Philanthropy grants page), sum all grants made between October 2023 and January 2030 that support S-risk prevention projects, and resolve YES if the grand total exceeds ten million US dollars.

To count, a grant has to be aimed at S-risk specifically (and/or S-risk should be the primary motivation for the project). General interventions that seem likely to make the future go better in a variety of ways (like improving international diplomacy, or making progress on AI alignment) won't count unless they are primarily motivated by S-risk concerns.

"EA funders" means places like OpenPhil, LTFF, SFF, Longview Philanthropy, Founders Fund, GiveWell, ACX Grants, etc. Some example "EA-adjacent" funding sources that wouldn't count, even if their money goes directly to this cause area: Patrick Collison, Yuri Milner, the Bill & Melinda Gates Foundation, Elon Musk, Vitalik Buterin, Peter Thiel. This is obviously a fuzzy distinction (what if one of the aforementioned billionares becomes noticeably more EA-influenced by 2030? etc), but I'll try my best to resolve the question in the spirit of reflecting how the EA community has grown over time.

For markets about other cause area candidates (like stable totalitarianism and human intelligence augmentation!), check out the "New EA Cause Area?" tag!
