Curated. I have at times been more cautious in communicating my object-level views than I now wish I had been. I appreciate this post as a flag for courage: something others might see, and which might counter some of the (according to me) prevailing messages of caution. Those messages, at least in my case, contributed significantly to my caution, and I wish something like this post had been around for me to read before I had to decide how cautious to be.
The argument this post presents for the conclusion that many people should be braver in communicating about AI x-risk by their own lights is only moderately convincing. Its force depends heavily on how the book’s blurbs were sampled, and it seems likely that a lot of optimization went into getting strong snippets rather than a representative sample. Even though the post addresses this specific concern, I find it hard to update much without knowing the details of how the blurbs were collected. Still, it’s not totally unconvincing.
I’d like to see more empirical research into what kinds of rhetoric work to achieve which aims when communicating with different audiences about AI x-risk. This seems like the sort of thing humanity has already specced into studying, and I’d love to see more writeups applying that existing competence to these questions.
I also would have liked to see more of Nate’s personal story: how he came to hold his current views. My impression is that he didn’t always believe so confidently that people should show more of the courage of their convictions when talking about AI x-risk. A record of how his mind changed over time, and what observations/thoughts/events caused that change, could be informative for others in an importantly different way than this post or empirical work on the question could be. I’d love to see a post from Nate on that in the future.