My read is that the cooperation he is against is with the narrative that AI-risk is not that important (because it’s too far away or weird or whatever). This indeed influences which sorts of agencies get funded, which is a key thing he is upset about here.
On the other hand, engaging with the arguments is cooperating on shared epistemics, which I’m sure he’s happy to do. Also, I think that if he thought that the arguments in question were coming from a genuine epistemic disagreement (and not motivated cognition of some form), he would (correctly) be less derisive. There is much more to be gained (in expectation) from engaging with an intellectually honest opponent than one with a bottom line.
Hm, I still don’t really understand what it means to be [against cooperation with the narrative that AI risk is not that important]. Beyond just believing that AI risk is important and acting accordingly. (A position that seems easy to state explicitly.)
Also: The people whose work is being derided definitely don’t agree with the narrative that “AI risk is not that important”. (They are and were working full-time to reduce AI risk because they think it’s extremely important.) If the derisiveness is being read as a signal that “AI risk is important” is a point of contention, then the derisiveness is misinforming people. Or if the derisiveness was supposed to communicate especially strong disapproval of any (mistaken) views that would directionally suggest that AI risk is less important than the author thinks: then that would just seem like soldier mindset (more harshly criticizing views that push in directions you don’t like, holding goodness-of-the-argument constant), which seems much more likely to muddy the epistemic waters than to send important signals.
Yeah, those are good points… I think this conflicts with the overall structure I’m describing, but apparently I’m not modeling the details well.
Thank you!