I do think the book is just very high-quality (I read a preview copy), and I would obviously curate it if it were a post, independently of its object-level conclusions.
Would you similarly promote a very high-quality book arguing against AI xrisk by a valued LessWrong member (let’s say titotal)?
I’m fine with the LessWrong team not being neutral about AI xrisk. But I do suspect that this promotion could discourage AI risk sceptics from joining the platform.
Yeah, same as Ben. If Hanson or Scott Alexander wrote something on the topic I disagreed with, but it was similarly well-written, I would be excited to do something similar. Eliezer is of course more core to the site than approximately anyone else, so his authorship carries more weight, which is part of my thinking on this. I think Bostrom’s Deep Utopia was maybe a bit too niche, but I’m not sure; it’s pretty plausible I would have done something for that if he had asked.
I’d do it for Hanson, for instance, if it indeed were very high-quality. I expect I’d learn a lot from such a book about economics and futurism and so forth.