Re: specific claims to falsify, I generally buy the argument.
If I had to pick out the specific aspects that seem weakest, I think they would mostly relate to our confusion around agent foundations. It isn’t obvious to me that the way “intelligence” or “goals” are characterized within the instrumental convergence argument is a good match for the way current systems actually operate (though the match seems close enough, and we have no reason to expect any mismatch to err in a direction that makes the situation better).
I would agree that instrumental convergence is probably not a necessary component of AI x-risk, so you’re correct that “crux” is a bit of a misnomer.
In my experience, however, it is one of the primary arguments people rely on when explaining their concerns to others, and credence in instrumental convergence seems to correlate strongly with concern about AI x-risk. IMO it is also one of the most concerning legs of the overall argument.
If somebody made a compelling case that we should not expect instrumental convergence by default in the current ML paradigm, I think the overall argument for x-risk would have to look fairly different from the one usually put forward.