The actually honest answer here is “I care about minorities, but you’re wrong about the interaction between AI and minorities”.
I agree about the facts here, but it strikes me that you might get better results if, rather than immediately telling them they’re wrong, you instead asked, “What exactly are the risks to minorities you’re referring to?” Either they’ll answer with some concrete examples, which you can engage with and show to be a less pressing concern than AI x-risk, or they’ll flounder and fail to produce any, in which case they clearly don’t have a leg to stand on.
Of course, certain social justice types will be inclined to treat you as a horrible bigot for not immediately acting as though you already know what they’re referring to and agree with it, but those types will be impossible to convince anyway. It would probably make you look better in the eyes of any reasonable third party to the exchange, though, which would be valuable if your goal is to convince people that AI x-risk is a credible concern.