against “AI risk”

Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like

  • bio/nano-tech disaster

  • Malthusian upload scenario

  • highly destructive war

  • bad memes/philosophies spreading among humans or posthumans and overriding our values

  • upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support

Why, for example, is lukeprog’s strategy sequence titled “AI Risk and Opportunity”, instead of “The Singularity, Risks and Opportunities”? Doesn’t it seem strange to assume that both the risks and opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don’t see how we can draw such conclusions with any confidence even after a thorough analysis.

SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn’t concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say “doom is probably coming, we’re not sure how, but here are some likely possibilities”?)