Without rejecting any of the premises in your question, I can come up with:
Low tractability: you assign almost all of the probability mass to one or both of “alignment will be easily solved” and “alignment is basically impossible”; in either case, marginal effort barely changes the outcome.
Currently low tractability: if your timeline is closer to 100 years than 10, it is possible that the best use of resources for AI risk is “sit on them until the field develops further”, in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.
Refusing to prioritize highly uncertain causes, in order to avoid the Winner’s Curse outcome where your highest priority turns out to be something with low true value and high estimation noise (a toy simulation of this selection effect follows this list).
Flavours of utilitarianism that don’t value the unborn (e.g. person-affecting views) and so would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details, human extinction might not even be negative, so long as the deaths aren’t painful).
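To make the Winner’s Curse point concrete, here is a minimal simulation sketch (my own illustration; the cause names and every number in it are invented). Two hypothetical causes have fixed true values, but a prioritizer only sees noisy estimates of them and picks whichever estimate is highest:

```python
import random

random.seed(0)

# All names and numbers here are invented for illustration:
# a well-studied cause with true value 1.0 and low estimation noise,
# and a speculative cause with lower true value 0.5 but much noisier estimates.
CAUSES = {"well-studied": (1.0, 0.2), "speculative": (0.5, 2.0)}

TRIALS = 100_000
wins = {name: 0 for name in CAUSES}
overshoot = {name: 0.0 for name in CAUSES}  # winner's estimate minus its true value

for _ in range(TRIALS):
    # We never observe true values, only noisy estimates of them.
    estimates = {name: random.gauss(mu, sd) for name, (mu, sd) in CAUSES.items()}
    winner = max(estimates, key=estimates.get)  # prioritize by estimate
    wins[winner] += 1
    overshoot[winner] += estimates[winner] - CAUSES[winner][0]

for name in CAUSES:
    print(f"{name}: ranked first {wins[name] / TRIALS:.0%} of the time; "
          f"when it wins, its estimate exceeds its true value by "
          f"{overshoot[name] / wins[name]:+.2f} on average")
```

Despite being the worse cause, the speculative one tops the ranking a large minority of the time, and whenever it does, its estimate is inflated by roughly two points over its true value: selecting the maximum of noisy estimates systematically selects for estimation error, which is exactly the failure mode the item above is trying to avoid.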