
I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
40% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
This question is managed and resolved by Manifold.
Related questions
Will the OpenAI Non-Profit become a major AI Safety research funder? (Announced by end of 2025)
38% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
AI safety community successfully advocates for a global AI development slowdown by December 2027
12% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
40% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will someone commit violence in the name of AI safety by 2030?
60% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
