[Question] Can someone explain to me why MIRI is so pessimistic about our chances of survival?

To be clear, when I reference MIRI as being pessimistic, I’m mostly referring to the broad caricature of them that exists in my mind. I’m assuming that their collective arguments can be broken down into:

1.) There is no plan or workable strategy that would help us survive, and this state of ignorance will persist until the world ends.

2.) The time we have left is insufficient to save the world, judging by the rate of progress in AI capabilities and the lack of progress in AI alignment.

3.) Resources allocated to boosting AI capabilities far outweigh the resources allocated to solving alignment.

4.) The alignment problem is incredibly difficult. (Why this is true is somewhat unclear to me, and I would appreciate some elaboration in the comments.)

5.) Any prosaic alignment strategies that could possibly exist are necessarily doomed. (Why this is true is also very unclear to me, and I would greatly appreciate some elaboration in the comments.)

6.) We have to get alignment right on the first try.

Put all of that together and I'll be the first to admit that the situation doesn't look good at all. I'd just like to know: is there some more implicit, even more damning piece of evidence pointing toward hopelessness that I'm missing here?