[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

ETA: I’ll be adding things to the list that I think belong there.

I’m assuming a high level of credence in classic utilitarianism, that AI-Xrisk is significant (e.g. roughly >10%), and that timelines are not long (e.g. >50% chance of ASI in <100 years). ETA: For the purposes of this list, I’m not interested in questioning those assumptions.

Here’s my current list (off the top of my head):

  • not your comparative advantage

  • consider other Xrisks more threatening (top contenders: bio / nuclear)

  • infinite ethics (and maybe other fundamental ethical questions, e.g. to do with moral uncertainty)

  • S-risks

  • simulation hypothesis

  • ETA: AI has high moral value in expectation / by default

  • ETA: low tractability (either at present or in general)

  • ETA: Doomsday Argument as overwhelming evidence against futures with large numbers of minds

Also, does anyone want to say why they think none of these should change the picture? Or point to a good reference discussing this question? (etc.)