Other reasons that people may have (I have some of these reasons, but not all):
not a classical utilitarian
don’t believe those timelines
too distant to feel an emotional tie to
unclear what to do even if it is a priority
very high discount rate for future humans
belief that moral value scales with cognitive ability (an extremely smart AI may be worth a few quintillion humans in a moral/experiential sense)
Of these, the one I'm personally least moved by, while acknowledging it as one of the better arguments against utilitarianism, is the last. It's clear that there is SOME difference in moral weight between the experiences of different experiencers, which means there is some dimension on which a utility monster is conceivable. If it's a dimension on which AGI will excel, we can maximize utility by giving it whatever it wants.