[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

ETA: I’ll be adding things to the list that I think belong there.

I’m assuming a high level of credence in classical utilitarianism, that AI-Xrisk is significant (e.g. roughly >10%), and that timelines are not long (e.g. >50% chance of ASI within 100 years). ETA: For the purpose of this list, I don’t care about questioning those assumptions.

Here’s my current list (off the top of my head):

  • not your comparative advantage

  • consider other Xrisks more threatening (top contenders: bio / nuclear)

  • infinite ethics (and maybe other fundamental ethical questions, e.g. to do with moral uncertainty)

  • S-risks

  • simulation hypothesis

  • ETA: AI has high moral value in expectation / by default

  • ETA: low tractability (either at present or in general)

  • ETA: Doomsday Argument as overwhelming evidence against futures with large numbers of minds (see the sketch after this list)
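
A minimal sketch of how the Doomsday Argument is supposed to count against big futures, assuming the self-sampling assumption (SSA) and treating your birth rank n as a uniform draw from the N observers who will ever exist (one standard formulation, not the only one):

$$P(N \mid n) \;\propto\; P(n \mid N)\,P(N) \;=\; \frac{P(N)}{N} \qquad (n \le N)$$

So hypotheses with vastly more future minds get penalized roughly in proportion to N, which, taken at face value, undercuts the astronomical-waste case for prioritizing Xrisk reduction. Whether the update actually goes through depends on contested anthropics (e.g. SIA roughly cancels the 1/N factor).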

Also, does anyone want to say why they think none of these should change the picture? Or point to a good reference discussing this question? (etc.)