If you can express empathy, show that you do in fact care about the harms they’re worried about as well
Someone can totally do that and express that indeed “harms to minorities” is something we should care about. But OP said that the objection was “the harm AI and tech companies do to minorities and their communities”, and… AI is doing no harm that only affects “minorities and their communities”. If anything, current AI is likely to be quite positive for them. The actually honest answer here is “I care about minorities, but you’re wrong about the interaction between AI and minorities”. And this isn’t going to land super well with leftists, IMO.
when I was running the EA club
Also, were the people you were talking to EAs, or there because they were interested in EA in the first place? If so, your positive experience in tackling these topics is very likely not representative of the kind of thing OP is dealing with.
The actually honest answer here is “I care about minorities, but you’re wrong about the interaction between AI and minorities”.
I agree about the facts here, but it strikes me that you might get better results if, rather than immediately telling them they’re wrong, you instead ask, “What exactly are the risks to minorities you’re referring to?” Either they’ll answer the question and give you some concrete examples, which you can engage with to show they aren’t as pressing a concern as AI x-risk, or they’ll flounder and be unable to give any examples, in which case they clearly don’t have a leg to stand on.
Of course, certain social justice types will be inclined to act as if you’re a horrible bigot for not immediately pretending to know what they’re referring to and agreeing with them, but those types will be impossible to convince anyway. Asking the question would probably make you look better in the eyes of any reasonable third party to the exchange, though, which is valuable if your goal is to make people think AI x-risk is a credible concern.