
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
12 traders · Ṁ280 · closes 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile."
This question is managed and resolved by Manifold.
Related questions
Will we solve AI alignment by 2026?
8% chance
Will Anthropic be the best on AI safety among major AI labs at the end of 2025?
87% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will I (co)write an AI safety research paper by the end of 2024?
45% chance
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons?
20% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will the OpenAI Non-Profit become a major AI Safety research funder? (Announced by end of 2025)
35% chance
Will there be serious AI safety drama at Meta AI before 2026?
58% chance
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will Eliezer Yudkowsky believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
3% chance