My honest impression, though I could be wrong and didn't analyze the prepublication reviews in detail, is that there is quite a lot of demand for this book, in the sense that there are many people who are worried about AI for agent-foundations-shaped reasons and want an introduction they can give to their friends and family who don't care that much.
For example, I think this review from Matt Yglesias (https://x.com/mattyglesias/status/1967765768948306275?s=46) makes the point fairly explicit: he obviously has a preexisting interest in this subject and is endorsing the book because he wants the subject to get more attention, which doesn't necessarily mean the book is good. I in fact agree with a lot of the book's basic arguments, but I think I would not be remotely persuaded by this presentation if I weren't already inclined to agree.
> there is quite a lot of demand for this book, in the sense that there are many people who are worried about AI for agent-foundations-shaped reasons and want an introduction they can give to their friends and family who don't care that much.
This is true, but many of the surprising prepublication reviews are from people who I don't think were already up to date on these AI x-risk arguments (or who at least hadn't given any prior public indication of their awareness, unlike Matt Yglesias).
Schneier is obviously just one example, but he has generally been quite skeptical, and he blurbed the book.