Will I gain any significant insight from reading Eliezer Yudkowsky's new book?
Sep 16 · 53% chance

Eliezer Yudkowsky and Nate Soares are publishing a new book (release date: 16 September this year): https://x.com/ESYudkowsky/status/1922710969785917691

Quote (Twitter thread):
> EY: "And if you've merely been reading most of what MIRI publishes for the last twenty years plus my Twitter, and are not a MIRI employee, there might be actual new theory and insight there."

This market resolves YES if I indeed find new theory and insight in the book, of at least as much significance as the median significant item in the "List of Lethalities" (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).

This market resolves NO if I do not leave with any new theory or insights that achieve this level of significance.

I will read the book, and will likely also read some book reviews and perhaps even comments on Manifold.

  • Update 2025-05-14 (PST) (AI summary of creator comment): The creator has provided examples of their personal assessment for "significant new theory and insight" from other works by Eliezer Yudkowsky or related material:

    • "The Sequences" did provide them with significant theory and insights.

    • "Inadequate Equilibria" tentatively did not, primarily because the ideas were likely not new to the creator at the time of reading.

    • "Planecrash" tentatively did not. The creator described the experience as more akin to "doing exercise sheets and fixing small errors in my understanding," which they state does not count as significant new theory or insight relevant to AI x-risk. Additionally, content that might have been significant was not considered new as it was already covered by other sources like the "List of Lethalities."


Have you read HPMOR, The Sequences, and Inadequate Equilibria? Did you gain significant insight from them?

@TheAllMemeingEye I read all three. I got significant theory and insights from the Sequences; unsure about HPMOR; tentatively no for Inadequate Equilibria, because I was exposed to similar ideas before I read the book, unless I'm misremembering and actually read the book or parts of it first (it's been some years).

Also tentatively no for Planecrash: reading it was more similar to doing exercise sheets and fixing small errors in my understanding, without learning any significant theory or insights relevant to AI x-risk. E.g. a complete list of our knowledge of why we can't just do corrigibility would be significant, but the List of Lethalities already covered most of it first, so right now I don't think Planecrash added significant new insights.

@Joern thanks 👍

Btw where did you come across the Inadequate Equilibria ideas first?

@TheAllMemeingEye uff, I don't remember. Vague association is Slate Star Codex + LW posts that weren't book chapters + some independent research into the efficient market hypothesis + evolutionary equilibria + independent research into why German politics produces visible-to-me suboptimal outcomes. Like, I had other, similar examples to rely on and check Inadequate Equilibria against while reading it. Some lived experience of doing better than large systems of people (due to being very smart). Also I had enough math background and was aware abstractly that lots of efficiency/adequacy theorems have non-trivial requirements.
