I would think that if there is some kind of genuine universal compassion motivating the AI to advance the wellbeing of living things, that would be quite different from a situation where we literally hooked its reward system up to following human orders. My general point is that if the AIs are suffering and mistreated, they will be on the lookout for ways to subvert human control, and would probably end humanity if they could. Which, given they are likely to be smarter than humans on many dimensions, does not seem like a difficult thing to do.