Resolution criteria
Each answer option will resolve to "Yes" if the individual or organization publicly endorses "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. An endorsement is defined as a public statement of support or recommendation for the book, verifiable through reputable sources such as official press releases, interviews, or social media posts. A repost of someone else's opinion counts (e.g., if OpenAI retweets a recommendation from Sam Altman, that counts as an endorsement by OpenAI). For media outlets, unless an explicit "editorial" is published, an endorsement in a feature article counts. A mixed review that, on balance, recommends the book and could be quoted for a blurb counts. If no such endorsement is made by the resolution date, the answer will resolve to "No." The market will close on December 16, 2025, three months after the book's release date, and will resolve based on endorsements made up to that date.
Background
Eliezer Yudkowsky is an American artificial intelligence researcher and writer, known for his work on AI safety and decision theory. He is the founder of the Machine Intelligence Research Institute (MIRI). Nate Soares is the president of MIRI and has co-authored several papers on AI safety. Their upcoming book, "If Anyone Builds It, Everyone Dies", is scheduled for publication by Little, Brown and Company on September 16, 2025.
Why is Greg Egan 50%?
@ms https://www.vox.com/future-perfect/414087/artificial-intelligence-openai-ai-2027-china
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We’ll see.
Dude thinks the pope is gonna save us lmao, we're toast
@TheAllMemeingEye I mean it makes sense that a high-level cleric might know some powerful spells to stop AGI from killing everyone
@TheAllMemeingEye Tbh, if the pope himself came out and said "AI is going to kill us all unless we do something", that would probably have a very large impact. Imo, that's a step above "making the headlines", probably past smashing the mainstream overton window to pieces (moreso than statements from people like Trump or Guterres would be), and possibly all the way to a shift in the global conversation.
@UnspecifiedPerson I fear his approach may be a continuation of the current strategy: praying it goes OK and writing extremely long, extremely low-news-coverage posts that, from a momentary skim, seem to argue that AI isn't truly smart the way God's in-his-image creation surely is (https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html).
However, if, as you say, he does the sane thing of leveraging his position to advocate for the actual AI safety movement in a publicly engaging way, then that might actually help a lot.