I used to have this specific discussion (x-risk vs. near-term social justice) a lot when I was running the EA club at the Claremont Colleges, and I had great success with it; I really don’t think it’s that hard a conversation to have, at least no harder than bridging any other ideological divide. If you express empathy, show that you do in fact care about the harms they’re worried about as well,[1] and then talk about how you think about scope sensitivity and cause prioritization, I’ve found that a lot of people are more receptive than you might initially give them credit for.
Assuming you do, that is; I think it’s an important prerequisite that you yourself actually care about social justice issues too. And I think you should care about social justice issues to some extent; they are real issues! If you feel like they aren’t real issues, I’d probably recommend reading more history; I think people sometimes don’t appreciate the degree to which we are still living with, e.g., the legacy of Jim Crow. But of course I think you should care about social justice much less than you should care about x-risk.
If you can express empathy, show that you do in fact care about the harms they’re worried about as well
Someone can totally do that and affirm that, indeed, “harms to minorities” are something we should care about. But OP said the objection was “the harm AI and tech companies do to minorities and their communities,” and AI isn’t doing any harm that specifically affects “minorities and their communities”. If anything, current AI is likely to be quite positive for them. The actually honest answer here is “I care about minorities, but you’re wrong about the interaction between AI and minorities”, and that isn’t going to land super well with leftists, IMO.
when I was running the EA club
Also, were the people you were talking to EAs, or there because they were interested in EA in the first place? If so, your positive experience tackling these topics is very likely not representative of the kind of thing OP is dealing with.
The actually honest answer here is “I care about minorities, but you’re wrong about the interaction between AI and minorities”.
I agree about the facts here, but it strikes me that you might get better results if, rather than immediately telling them they’re wrong, you instead ask, “What exactly are the risks to minorities you’re referring to?” Either they’ll answer the question and give you some concrete examples, which you can engage with and show aren’t as pressing a concern as AI x-risk, or they’ll flounder and be unable to give any examples, in which case they clearly don’t have a leg to stand on.
Of course, certain social justice types will be inclined to act as if you’re a horrible bigot for not immediately acting as if you know what they’re referring to and agreeing, but those types will be impossible to convince anyway. Asking the question would probably make you look better in the eyes of any reasonable third party to the exchange, though, which is valuable if your goal is to make people think AI x-risk is a credible concern.