My read is that the cooperation he is against is cooperation with the narrative that AI risk is not that important (because it's too far away, or too weird, or whatever). That narrative indeed influences which sorts of agencies get funded, which is a key thing he is upset about here.
On the other hand, engaging with the arguments is cooperation on shared epistemics, which I'm sure he's happy to do. Also, I think that if he believed the arguments in question came from a genuine epistemic disagreement (rather than motivated cognition of some form), he would, correctly, be less derisive. There is much more to be gained (in expectation) from engaging with an intellectually honest opponent than with one who has already written their bottom line.
Yeah, those are good points… I think they're in tension with the overall structure I'm describing, but apparently I'm not modeling the details well.
Thank you!