Will Eliezer Yudkowsky believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
13 traders · Ṁ1,984 volume · closes Dec 31
3% chance
Just like in the Dan Hendrycks market, it'd be great if Eliezer could provide/suggest some likely criteria for this!
This question is managed and resolved by Manifold.
Related questions
Will Eliezer Yudkowsky work for any major AI-related entity by 2027?
20% chance
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
10% chance
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?
51% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Eliezer Yudkowsky is impressed by a machine learning model, and believes that the model may be very helpful for alignment research, by the end of 2026
15% chance
Will Eliezer Yudkowsky become romantically involved with an AI before 2030?
7% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
xAI builds truth-seeking AI before 2027?
35% chance
Will xAI stop working on AI research by 2029?
24% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance