To check, do you have particular people in mind for this hypothesis? It seems kinda rude to name them here, but could you maybe send me some guesses privately? I currently don't find this hypothesis as stated very plausible; or, sure, maybe, but I think it accounts for a relatively small fraction of the effect.
Curated. I have at times been more cautious in communicating my object-level views than I now wish I had been. I appreciate this post as a flag for courage: something others might see that could counter some of the (according to me) prevailing messages of caution. Those messages, at least in my case, contributed significantly to my caution, and I wish something like this post had been around for me to read before I had to decide how cautious to be.
The argument this post presents for the conclusion that many people should be braver in communicating about AI x-risk by their own lights is only moderately convincing. It relies heavily on how the book's blurbs were sampled, and it seems likely that a lot of optimization went into getting strong snippets rather than a representative sample. Without knowing the details of how the blurbs were collected, I find it hard to update much on this, even though you address this specific concern. Still, it's not totally unconvincing.
I’d like to see more empirical research into what kinds of rhetoric work to achieve which aims when communicating with different audiences about AI x-risk. This seems like the sort of thing humanity has already specced into studying, and I’d love to see more writeups applying that existing competence to these questions.
I also would have liked to see more of Nate's personal story: how he came to hold his current views. My impression is that he didn't always believe so confidently that people should hold to the courage of their convictions when talking about AI x-risk. A record of how his mind changed over time, and what observations, thoughts, and events caused that change, could be informative for others in an importantly different way from this post or from empirical work on the question. I'd love to see a post from Nate on that in the future.
Is this coming just from the models having geographic data in their training? That would be much less impressive, but still cool.