
Eliezer Yudkowsky is impressed by a machine learning model, and believes that the model may be very helpful for alignment research, by the end of 2026
25% chance (1.1k traders, Ṁ8,602 volume, closes end of 2026)
Let's operationalize it this way:
"If Eliezer either expresses that the model may be very helpful for alignment research, or strongly implies that he feels this way (e.g., by indicating that it is more useful than an additional MIRI-level researcher), then we consider this market resolved YES."
This question is managed and resolved by Manifold.
Related questions
Eliezer Yudkowsky is found to have performed prediction market fraud by the end of 2025 (8% chance)
Eliezer Yudkowsky calls me “promising” before the end of 2026 (12% chance)
Will we solve AI alignment by 2026? (1% chance)
Will Eliezer Yudkowsky work for any major AI-related entity by 2027? (20% chance)
Will Eliezer Yudkowsky work for Anthropic OR OpenAI in any role before the end of 2025? (2% chance)
Will Eliezer Yudkowsky work for OpenAI OR Humane in any role before the end of 2025? (1% chance)
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability? (54% chance)
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030? (7% chance)
Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
Eliezer Yudkowsky irrelevant by 2030 (54% chance)