
Will the Gates Foundation give more than $100mn to AI Safety work before 2025?
Resolved NO · Ṁ5,695 · resolved Jan 22
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ302
2 | | Ṁ90
3 | | Ṁ70
4 | | Ṁ67
5 | | Ṁ19
People are also trading
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will the OpenAI Non-Profit become a major AI Safety research funder? (Announced by end of 2025)
17% chance
Will Anthropic be the best on AI safety among major AI labs at the end of 2025?
87% chance
Will the US Federal Government spend more than 1/1000th of its budget on AI Safety by 2028?
13% chance
Will a >$10B AI alignment megaproject start work before 2030?
32% chance
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
59% chance
Will OpenAI hire an AI welfare researcher before the end of 2025?
19% chance
Will National Governments Collectively Give More than $100M a year in funding for AI Alignment by 2030?
81% chance
Will I work (at some point) at a top AI lab on safety in the next 5 years?
73% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
@NathanpmYoung Adverse consequences from the militarization of AI, where the focus of the grant is not on existential or near-existential risk. That seems to fall somewhere between the core "AI ethics" and core "AI safety" camps. Basically, anything where the consequence is non-localized death but not an existential or near-existential risk could be unclear.