> a superintelligence will be at least several orders of magnitude more persuasive than character.ai or Stuart Armstrong.
Believing this seems central to believing in a high P(doom).
But I don't think the concept is coherent enough to justify that belief. Yes, some people are far more persuasive than others. But how can you extrapolate that far beyond the distribution we observe in humans? I do think AI will prove to be better than humans at this, and likely much better.
But “much” better isn’t the same as “better enough to be effectively treated as magic”.
Great post, and timely for me personally. I found myself having similar thoughts recently, which was a large part of why I decided to start engaging with the community more (so apologies for coming on strong in my first comment while likely lacking good norms).
Some questions I’m trying to answer, and this post certainly helps a bit:
- Is there a general consensus on the goals of the rationalist community? I feel like there implicitly is something like "learn and practice rationality as a human" and "debate and engage well to co-develop valuable ideas".
- Would a goal more like "helping raise the overall sanity waterline" ultimately be a more useful and successful purpose for this community? I tentatively think so. Among other reasons, as bc4026bd4aaa5b7fe points out, a number of forces push this community towards insularity, and an explicit goal counteracting that tendency would be useful.