This is a valid line of critique, but it seems moderately undercut by the book’s prepublication endorsements, which suggest the arguments landed reasonably well. Maybe they will land less well on the rest of the book’s target audience?
(re: Said & MIRI housecleaning: Lightcone and MIRI are separate organizations, and MIRI does not moderate LessWrong. You might try to theorize that Habryka, the person who made the call to ban Said back in July, was attempting some 4d-chess PR optimization on MIRI’s behalf months ahead of time, but no: Said really was nearly banned multiple times over the years, and he was finally banned this time because Habryka changed his mind after the most recent dust-up. Said practically never commented on AI-related subjects, so it’s not even clear what the “upside” would’ve been. From my perspective, this type of thinking resembles the constant noise on e.g. HackerNews about how [tech company x] is obviously doing [horrible thing y] behind the scenes, theories which often aren’t even in the company’s interests and generally rely on assumptions that turn out to be false.)
My honest impression, though I could be wrong and didn’t analyze the prepublication reviews in detail, is that there is very much demand for this book, in the sense that there are a lot of people who are worried about AI for agent-foundations-shaped reasons and want an introduction they can give to their friends and family who don’t care that much.
https://x.com/mattyglesias/status/1967765768948306275?s=46
For example, I think this review from Matt Yglesias makes the point fairly explicit? He obviously has a preexisting interest in this subject and is endorsing the book because he wants the subject to get more attention; that doesn’t necessarily mean the book is good. I in fact agree with a lot of the book’s basic arguments, but I think I would not be remotely persuaded by this presentation if I weren’t already inclined to agree.
Obviously just one example, but Schneier has generally been quite skeptical, and he blurbed the book.
there is very much demand for this book, in the sense that there are a lot of people who are worried about AI for agent-foundations-shaped reasons and want an introduction they can give to their friends and family who don’t care that much.

This is true, but many of the surprising prepublication reviews are from people who I don’t think were already up-to-date on these AI x-risk arguments (or at least hadn’t given any prior public indication of their awareness, unlike Matt Y).