I think there are very good questions in here. Let me try to simplify the logic:
First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few who have thought about it at least a little) seem to acknowledge a real possibility of AI disaster, though none of them give probability estimates.
Now to the logical argument itself:
a) We are probably at risk from the development of strong AI. b) The SIAI can probably do something about that.
The other points in the OP are not terribly relevant; Eliezer could be wrong about a great many things, but right about these.
This is not a castle in the sky.
Now to argue for each: there's no good reason to think AGI will NOT happen within the next century. Our brains produce general intelligence; why shouldn't artificial systems be able to? Artificial systems could do essentially nothing a century ago; even without assuming a strong exponential trend, they're clearly getting somewhere.
There are lots of arguments for why AGI WILL happen soon; see Kurzweil among others. I personally give it 20-40 years, even allowing for our remarkable cognitive weaknesses.
Next, will it be dangerous? a) Something much smarter than us will do whatever it wants, and very thoroughly. (This doesn't require godlike AI, just smarter than us; self-improvement helps, too.) b) The vast majority of possible "wants," pursued thoroughly, will destroy us. (Any goal taken to its extreme will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. And humans are notably greedy and bad planners individually, and often worse in groups, so that careful design won't happen by default.
Finally, it seems that SIAI might be able to do something about it; if not, they'll at least help raise awareness of the issue. And as someone pointed out, achieving FAI would have the nice side effect of preventing most other existential disasters.
Yes, this is a chain of logic, but each step seems likely, so multiplying the probabilities still gives a significant probability of disaster, which justifies spending some resources to prevent it (especially if you want to be nice). (Spending ALL your money or time on it probably isn't rational, though, since effort and money generally have sublinear payoffs in happiness.)
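To make that concrete with made-up numbers (illustrative assumptions only, not anyone's actual estimates): suppose the chance of AGI this century is 0.8, and the chance that an AGI built without very careful design destroys us is 0.8. The product, 0.8 × 0.8 = 0.64, is already a large probability of disaster. If work of the kind SIAI does has even a 0.1 chance of making the difference, that's roughly a 6% reduction in existential risk, which easily justifies spending something, though not necessarily everything.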
Hopefully this lays out the logic; now, which of the above do you NOT think is likely?
I think the point is that not valuing non-interacting copies of oneself might be inconsistent. I suspect that's true: consistency requires valuing parallel copies of ourselves just as we value future versions of ourselves (and so act to preserve our lives). After all, our future selves also can't "interact" with our current self.