I think that (metaphorically) there should be an all-caps disclaimer that reads something like “TO BE CLEAR AI IS STILL ON TRACK TO KILL EVERYONE YOU LOVE; YOU SHOULD BE ALARMED ABOUT THIS AND TELLING PEOPLE IN NO UNCERTAIN TERMS THAT YOU HAVE FAR, FAR MORE IN COMMON WITH YUDKOWSKY AND SOARES THAN YOU DO WITH THE LOBBYISTS OF META, WHO ABSENT COORDINATION BY PEOPLE ON HUMANITY’S SIDE ARE LIABLE TO WIN THIS FIGHT, SO COORDINATE WE MUST” every couple of paragraphs.
Yeah, I kind of regret not prefacing my pseudo-review with something like this. I was generally writing it from the mindset of “obviously the book is entirely correct and I’m only reviewing the presentation”, and my assumption was that trying to “sell it” to LW users was preaching to the choir (I would’ve strongly endorsed it if I had a big mainstream audience, or even if I were making a top-level LW post). But that does feel like part of the our-kind-can’t-cooperate pattern now.