Note to everyone else: the least you can do is share this post until everyone you know is sick of it.
I would be averse to this post being shared much outside LW circles, given its claims that AGI in the near future is plausible. I agree with the claim, but not really for the reasons provided in the post; I think it's reasonable to put some (say 10-20%) probability on AGI in the next couple of decades, due to the possibility of unexpectedly fast progress and the fact that we don't actually know what would be needed for AGI. But that isn't really spelled out in the post, and the general impression one gets from it is that "recent machine learning advances suggest that AGI will be here within a few decades with high probability".
This is a pretty radical claim which many relevant experts would disagree with, and which is not really supported or argued for in the post. I would expect that many experts who saw this post would lower their credence in AI risk as a result: they would see a view they strongly disagreed with, find no supporting arguments they'd consider credible, and end up thinking that Raemon (and by extension AI risk people) didn't know what they were talking about.
I do mostly agree with not sharing this as a public-facing document. This post is designed to be read after you've read the Sequences and/or Superintelligence and are already mostly on board.