I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high-profile".
Related questions
Will a major AI company acknowledge the possibility of conscious AIs by 2026?
72% chance
In 2025, will I believe that aligning AI systems that automate AI research should be the focus of the alignment community?
48% chance
Will an AI co-author a mathematics research paper published in a reputable journal before the end of 2026?
59% chance
Will I (co)write an AI safety research paper by the end of 2024?
49% chance
I am an AI safety researcher with a background in machine learning engineering and neuroscience. Will I personally be able to program and train an AGI for less than $10k by 2030?
23% chance
Will a large scale, government-backed AI alignment project be funded before 2025?
16% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
35% chance
Leaks or press releases that AIs are doing end-to-end AI research by 2026
62% chance
The AI company with the smartest AI system by the end of 2026
Will a leading AI organization in the United States be the target of an anti-AI attack or protest by the end of 2024?
30% chance