
How will "Governance of superintelligence" by OpenAI cause Yudkowsky to update?
Resolved Jun 22
Final probabilities:

He will keep it a secret: 61%
Update towards higher probability that we all die (p(doom) increase): 20%
Update towards lower probability that we all die (p(doom) decrease): 18%
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 |      | Ṁ53          |
| 2 |      | Ṁ5           |
| 3 |      | Ṁ1           |
| 4 |      | Ṁ0           |
Related questions
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability? (64% chance)
Will Eliezer Yudkowsky work for OpenAI OR Humane in any role before the end of 2025? (5% chance)
Will Eliezer Yudkowsky work for Anthropic OR OpenAI in any role before the end of 2025? (2% chance)
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030? (11% chance)
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029? (18% chance)
Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
Will Eliezer Yudkowsky work for any major AI-related entity by 2027? (20% chance)
Will this Yudkowsky tweet on AI babysitters hold up by Feb 2028? (49% chance)
If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)? (12% chance)
Will Eliezer Yudkowsky believe xAI has had a meaningful positive impact on AI alignment at the end of 2024? (3% chance)