Will Yudkowsky and Soares' book get on the NYT bestseller list in 2025?
Resolved YES (Sep 27)

Will Eliezer Yudkowsky and Nate Soares' book "If Anyone Builds It, Everyone Dies" get on the NYT bestseller list this year?


Verification will be based on the official NYT Best Seller lists. Currently I understand that to mean this resolves YES if the book makes the online list (top 35), but I intend it to mean whatever best maps to "can write 'New York Times Bestseller' on the book".

Number sold question: https://manifold.markets/NathanpmYoung/how-many-copies-of-yudkowsky-and-so?play=true

Mod FYI: it is unclear whether the relevant list is the top 15 or top 35. Note the criteria stated above.



🏅 Top traders

# | Total profit
1 | Ṁ26,758
2 | Ṁ24,166
3 | Ṁ7,790
4 | Ṁ4,794
5 | Ṁ4,580
sold Ṁ80 NO

Didn't expect that, but good outcome!

Never been happier to be wrong!

Oops. I was wrong.

Congrats to the Mikes @ms @MichaelWheatley

@bens I mean mostly congrats to the authors, to the team at MIRI, and to the countless others who helped, and to humanity: this is a huge amount of dignity points! Way over 99% of my happiness is not due to the mana I won!

Congrats to the team! It seems they have met the moment.

sold Ṁ364 NO

@ms congrats! we’re doomed! 🌈🍾🎉

sold Ṁ44 NO

@ms beat joe manchin, but far behind matthew mcconaughey's ("poems and prayers") 2nd week—such is life

#8 hardcover non-fiction, #7 in combined print & ebook

@MachiNi a little bit less doomed, now!

@ms I don’t know about that

@MachiNi seems true in expectation. Going from not having a book to having a book makes the AI moratorium position infinitely more accessible to large, influential swathes of society.

@jim assuming the book plays any causal role whatsoever and that a moratorium effectively prevents ASI. That’s what I doubt.

@MachiNi Definitely neither are guaranteed, but I wouldn't give super slim odds either. Also, it would certainly delay it significantly, which is the most important part, right?

@DavidHiggs Doesn’t ‘if anyone builds it then everyone dies’ hold independently of when it happens? A delay is good for those who live in the interim but everyone will still die.

As for me, I consider the odds of the book having a meaningful impact to be basically nil. It’s simple. If a moratorium is possible, it’s likely overdetermined—the book is not necessary. On the other hand, if a moratorium is extremely unlikely (as I think), then the book will be toothless—it’s not sufficient. More generally, if ASI can be built (or grow), it will, and the book can do nothing about it. If it can’t be built (or grow), then we’re fine and the book serves no need.

@MachiNi that’s just the title, the full statement is more like “If anyone builds it using anything like current methods & level of understanding, then everyone dies.” We will develop new methods and deeper understanding over time, probably closer to years/a couple decades than most of a century. The moratorium is to buy time for that scientific progress.

As for likelihoods, maybe you're right; I'm not sure how good my models of social and political change are. Certainly it's not all that likely.

@DavidHiggs oh I read the book. I disagree. I don’t think they believe we can build safe ASI at all. Their central thesis is conceptual more than empirical, it seems to me.

@MachiNi I actually haven't gotten a chance to read it yet, so maybe you're right. It would be a significant change from what I thought the authors' views were, based on their other writings.

@MachiNi you’re wrong, the authors believe that in principle, it is possible to build safe ASI.

@ms in a totally different world in which we don’t build it like we do, I guess. Please point me to the passage in the book where they concede that we could, if we implemented a moratorium. I may actually have missed it between the parables.

@ms happy to concede: in principle. It’s conceptually possible, if we start building AI completely differently from scratch.

@MachiNi I re-read the last three chapters. Man, if they think it’s theoretically possible (they do, it’s just really hard!), they don’t sell it well. Almost everything they write supports the conclusion that humanity as we know it is constitutionally incapable of solving the alignment problem. I very much doubt that a moratorium, if it blocked ASI, would make safe ASI subsequently (when?!) tractable.
