
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
Resolves YES if, in my judgement, AI alignment or existential safety is the reason for the award, even if those exact terms don't appear in the official announcement.
If I'm not around, someone else can resolve it in that spirit.
Past winners and rationales: https://en.wikipedia.org/wiki/Turing_Award
This question is managed and resolved by Manifold.
Related questions
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will an AI get a Nobel Prize before 2050?
28% chance
Will I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026?
40% chance
Will a Nobel prize be awarded for the invention of AGI before 2050?
26% chance
Who will be the next Turing Award Winner for research in AI/ML?
Will there be a well-accepted formal definition of value alignment for AI by 2030?
25% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will OpenAI publicly state that they know how to safely align a superintelligence before 2030?
21% chance