Will Eliezer Yudkowsky and Nate Soares's book "If Anyone Builds It, Everyone Dies" get on the NYT bestseller list this year?
Verification will be based on the official NYT Best Seller lists. Currently I understand that to mean this resolves YES if it makes the online list (top 35), but I intend it to mean whatever best maps to being able to write "New York Times Bestseller" on the book.
Number sold question: https://manifold.markets/NathanpmYoung/how-many-copies-of-yudkowsky-and-so?play=true
Update 2025-05-18 (PST) (AI summary of creator comment): The criteria for the book appearing on the NYT Bestseller list are:
- List frequency: weekly
- Required placement: top 35
- Eligible lists: any category
@Alfie that was one of the bad decisions made with this market: it should just have been the published list, period. There is an extended list, but it's not public; it's not 'online', contrary to what the description says. Agents or publishers may get the deets. A book that doesn't make the cutoff for the published list may still be on the extended list, but it doesn't get to put 'New York Times Bestseller' on its cover, so it doesn't fit that part of the criterion. In fact I'm not sure it even gets to be called a New York Times Bestseller, though promotional materials can mention its ranking. How we are going to get that info if the book is, say, 26th on the list, I have no idea. I guess the authors could tell us, if they know, but it'd be nice to be able to verify it independently.
@MachiNi I think (I might be wrong) that the 'online only' list is the combined print and e-book list, but it's also just a top 15.
Stephen Marche is not a fan of the book:
Critics of A.I. doomerism maintain that the mind-set suffers from several interlocking conceptual flaws, including that it fails to define the terms of its discussion — words like “intelligence” or “superintelligence” or “will” — and that it becomes vacuous and unspecific at key moments and thus belongs more properly to the realm of science fiction than to serious debates over technology and its impacts. Unfortunately, “If Anyone Builds It, Everyone Dies” contains all these flaws and more. The book reads like a Scientology manual, the text interspersed with weird, unhelpful parables and extra notes available via QR codes.
…
Following their unspooling tangents evokes the feeling of being locked in a room with the most annoying students you met in college while they try mushrooms for the first time.
@ms shouldn’t they want to be able to convince people like him rather than people already favorably disposed to their case? Seems like a better test.
I'm not quite sure how closely he's read the book.
I expect that most people's reactions would be different from this one.
Most people are not already against the idea that AI might be very dangerous.
@ms that all sounds plausible to me. All I'm saying is that the test that matters, if we're really talking about doom, is convincing the people who are both most difficult to convince and have relevant leverage. Maybe convincing Marche doesn't matter because he has no leverage. But that they failed to convince him signals something about their ability to convince other skeptics who might have more leverage.
No, AI isn’t going to kill us all, despite what this new book says
The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
@MachiNi If I were EY+NS, the primary goal wouldn't be to convince people already very knowledgeable about modern AI. Opinions among AI insiders have to a large extent ossified. I would be aiming to convince:
- Smart engineers or scientists working outside AI, especially software engineers
- People working in policy, who may or may not have studied much math in college.
among others.
@MachiNi I think Yudkowsky's and Soares's aim with the book is to convince the silent majority, who are otherwise likely to oppose further AI capability development but don't have this narrative to fuel that opposition; this in turn would put additional pressure on policymakers. Something like this framing is not so uncommon in the AI safety 'community'.
@ms I don't think LW / Rationalists are a cult, but what members of one alleged cult say about members of another alleged cult is not dispositive of whether any of them are actually members of a cult, or right about anything. Qua members of a cult, they are epistemically corrupt.
@ms I didn’t say it didn’t happen to you. What I’m saying is that what Scientology says is completely irrelevant because the well is poisoned.
“we recognize not-like-our-scriptures on sight”
(it’s a joke! sorry 🥺; reverse stupidity is indeed not intelligence, as the alleged leader of our cult once wrote in our very different scriptures)
@MachiNi yeah, if you don't buy a copy for everyone you know, don't make a donation to MIRI, and don't get tested for hidden AI safety abilities on a special device, you're not a real rationalist