It's the end of 2025, a global AI moratorium is in effect, and Eliezer Yudkowsky endorses it. What were its decisive causes?
The endorsement by Eliezer Yudkowsky has to be public.
It has to clearly describe the moratorium as "going far enough" by his own criteria, not merely e.g. "a step in the right direction" (and also not as going "too far" or in the wrong direction, such that it's worse than if it didn't exist).
Answers should be events or actions by individuals, groups, countries, etc. that are verified or widely believed to have been decisive, intentionally or not, in the moratorium coming into effect, i.e. it "wouldn't have happened without them".
This is my first market of this type, so I'm open to suggestions on wording, resolution criteria, and other parameters.
This question is managed and resolved by Manifold.
Related questions
If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)?
12% chance
Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
5% chance
Will there be a global "pause" on cutting-edge AI research due to government regulation by 2025?
1% chance
Will Yudkowsky agree that his "death with dignity" post overstated the risk of extinction from AI, by end of 2029?
15% chance
Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?
55% chance
Will Eliezer Yudkowsky believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
3% chance
Conditional on humanity surviving to 2035, will a global AI pause have been enacted?
11% chance
Will any world leader call for a global AI pause by EOY 2027?
70% chance
Will Y. LeCun turn AI doomer by end of 2025?
12% chance