Agreed on all points. I want to expand on two issues:
The first I think you agree with: it would be unfortunate if someone read this and thought “yeah, the immediate AI harms crowd is either insincere and Machiavellian at our expense, or just stupid. That’s so irritating. I’m gonna go dunk on them”. I think that would make matters worse. It would indirectly increase people saying “x-risk distracts from other AI concerns”, because a nontrivial factor here is that they’re motivated by and expressing irritation at the x-risk faction (whether that’s justified is beside the point). Us getting irritated at them will make them more irritated with us in a vicious cycle, and voilà, we’ve got two camps that could be allies spending their energy undercutting each other’s efforts.
You address that point by saying we shouldn’t be making the inverse silly argument that immediate harms distract from x-risk. I’d expand it to say that we shouldn’t be making any questionable arguments that antagonize other groups. We would probably improve our odds of survival by actively making allies, and by avoiding making enemies through irritating people unnecessarily.
The second addition is that I think the “x-risk distracts from...” argument is usually a sincere belief. I’m not sure whether you’d agree with this. The framing here could make it sound like a shrewd and deceptive planned strategy from the immediate harms crowd. It might occasionally be, but I know a number of people who are well-intentioned (and surprisingly well-informed) who genuinely believe that x-risk concerns are silly and that talking about them distracts from more pressing concerns. I think they’re totally wrong, but I don’t think they’re bad or idiotic people.
I believe in never attributing to malice that which could be attributed to emotionally motivated confirmation bias in evaluating complex evidence.