I've read a lot of Yudkowsky's and Soares's writing, and by now I find popular explainers of AI risk pretty tiresome.
"Value" as in: believing it was worth at least my hourly wage in reading time, plus the cost of the book.
Possible things that would make this true:
* Extensive descriptions of MIRI's updates on machine learning and LLMs
* Descriptions of some new agent foundations results
* More detailed descriptions of Lawfulness results and the like
Update 2025-05-14 (PST) (AI summary of creator comment): The creator notes that the book could be valuable, even without new fine-grained technical details of theory, if it provides a better understanding of:
* How various concepts fit together
* How these concepts relate to current AI paradigms
* Which pieces are necessary or unnecessary details of the disaster scenario