Eliezer Yudkowsky and Nate Soares are publishing a new book, releasing September 16, 2025: https://x.com/ESYudkowsky/status/1922710969785917691
Quote (Twitter thread):
> EY: "And if you've merely been reading most of what MIRI publishes for the last twenty years plus my Twitter, and are not a MIRI employee, there might be actual new theory and insight there."
This market resolves YES if I indeed find new theory and insight in the book, of at least as much significance as the median significant item in the "List of Lethalities" (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).
This market resolves NO if I do not come away with any new theory or insight that achieves this level of significance.
I will read the book, and will likely also read some book reviews and perhaps even comments on Manifold.
Update 2025-05-14 (PST) (AI summary of creator comment): The creator has given examples of how they have personally assessed "significant new theory and insight" in other works by Eliezer Yudkowsky or related material:
"The Sequences" did provide them with significant theory and insights.
"Inadequate Equilibria" tentatively did not, primarily because the ideas were likely not new to the creator at the time of reading.
"Planecrash" tentatively did not. The creator described the experience as more akin to "doing exercise sheets and fixing small errors in my understanding," which they state does not count as significant new theory or insight relevant to AI x-risk. Additionally, content that might have been significant was not considered new as it was already covered by other sources like the "List of Lethalities."