I've read a lot of Yudkowsky's and Soares' writing, and find popular explainers of AI risk pretty tiresome by now.
Value as in "believing it was worth at least my hourly wage in reading time, plus the cost of the book".
Possible things that would make this true:
* Extensive descriptions of MIRI's updates on machine learning and LLMs
* Descriptions of some new agent foundations results
* More detailed descriptions of Lawfulness results and the like
Update 2025-05-14 (PST) (AI summary of creator comment): The creator notes that the book could be valuable, even without new fine technical details of theory, if it provides a better understanding of:
* How various concepts fit together
* How these concepts relate to current AI paradigms
* Which pieces are necessary or unnecessary details of disaster
Positive information:
> Will anything in this book be new to you, if you've been following me on Twitter for a while? Yes with quite high probability -- not in fine technical details of theory, maybe, but in terms of understanding how it all fits together, and how it relates to current AI paradigms, and which pieces are necessary or unnecessary details of disaster.
> I'm guessing this because actual MIRI employees say they got a lot out of the book! And if you've merely been reading most of what MIRI publishes for the last twenty years plus my Twitter, and are not a MIRI employee, there might be actual new theory and insight there.
I'm skeptical of this claim.