Will a large-scale, Eliezer-Yudkowsky-approved AI alignment project be funded before 2025?
Resolved NO on Jan 2 · 1.1k traders · Ṁ21k volume
If Eliezer thinks the project is hopelessly misguided, that's no good. If he thinks the project isn't going to work but was at least a noble effort, that's sufficient to resolve this to YES. In other words, the project should be relatively similar to what he would have done with those resources if he were put in charge. (Or perhaps he says it's a better idea than what he would have come up with himself.)
The total amount of funding must be at least $3 billion. In the event of gradual funding over time, this market can resolve YES if the project ever meets all three criteria at any point in its life.
This question is managed and resolved by Manifold.
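Read literally, the criteria amount to a small decision procedure: the project must reach $3B in cumulative funding, clear Yudkowsky's approval bar (anything from "noble effort" up), and satisfy both conditions simultaneously at some point before 2025. Below is a minimal sketch of that logic in Python. All names here (`AlignmentProject`, `resolves_yes`, the verdict strings) are hypothetical illustrations of the resolution criteria, not part of any Manifold API.

```python
from dataclasses import dataclass
from datetime import date

# Verdicts that clear the bar described in the criteria: a noble effort
# counts; "hopelessly misguided" does not. Labels are illustrative only.
APPROVED_VERDICTS = {"would_have_done", "noble_effort", "better_than_my_idea"}

@dataclass
class AlignmentProject:
    total_funding_usd: float        # cumulative funding at any point in time
    eliezer_verdict: str            # e.g. "noble_effort", "hopelessly_misguided"
    date_criteria_met: date | None  # first date all criteria held simultaneously

def resolves_yes(project: AlignmentProject) -> bool:
    """YES only if, at some point before 2025, the project had >= $3B in
    funding and a verdict at least as favorable as 'noble effort'."""
    return (
        project.total_funding_usd >= 3e9
        and project.eliezer_verdict in APPROVED_VERDICTS
        and project.date_criteria_met is not None
        and project.date_criteria_met < date(2025, 1, 1)
    )

# Example: a $4B project Yudkowsky calls a noble effort, funded mid-2024
print(resolves_yes(AlignmentProject(4e9, "noble_effort", date(2024, 6, 1))))  # True
```

Note the "at any point in its life" clause: gradual funding means the check applies to the running total, so a project that crosses $3B in 2024 qualifies even if funding later dries up.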
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ2,577 |
| 2 | | Ṁ391 |
| 3 | | Ṁ222 |
| 4 | | Ṁ213 |
| 5 | | Ṁ37 |
Related questions
Will a >$10B AI alignment megaproject start work before 2030?
38% chance
Will Meta AI start an AGI alignment team before 2026?
45% chance
Will Eliezer Yudkowsky work for any major AI-related entity by 2027?
20% chance
Will National Governments Collectively Give More than $100M a year in funding for AI Alignment by 2030?
81% chance
Will the OpenAI Non-Profit become a major AI Safety research funder? (Announced by end of 2025)
17% chance
Will Eliezer Yudkowsky become romantically involved with an AI before 2030?
15% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance