"Utilitarianism implies that if we build an AI that successfully maximizes utility/value, we should be ok with it replacing us."
I agree with this. Just one quibble here:
That is true for some forms of utilitarianism (total or average utilitarianism), but not necessarily for others (person-affecting preference utilitarianism). For the reason you outline, I think the latter kind of utilitarianism is correct: its notion of utility/value incorporates what currently existing people want, and I believe that most humans do not want humanity to be replaced by AI in the future.
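To make the contrast concrete, here is a rough sketch of the two welfare functions at issue (the notation is mine, not from the post): total utilitarianism sums welfare over everyone who ever exists in an outcome, while a person-affecting preference view sums preference satisfaction over the people who exist now.

$$W_{\text{total}}(o) = \sum_{i \in L(o)} u_i(o), \qquad W_{\text{PA}}(o) = \sum_{i \in P_{\text{now}}} s_i(o)$$

where $L(o)$ is the set of all lives that ever exist in outcome $o$, $P_{\text{now}}$ is the set of currently existing people, and $s_i(o)$ measures how well $o$ satisfies person $i$'s preferences. A successor-AI outcome can score very high on $W_{\text{total}}$ while scoring low on $W_{\text{PA}}$, since most members of $P_{\text{now}}$ prefer that humanity not be replaced.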
One way to think about it: AI successionism is bad for a similar reason that we regard being dead as bad: we don’t want it to happen. We don’t want to die, and we also don’t want humanity to become extinct through sterilization or other means. Perhaps we have these preferences for biological reasons, but it doesn’t matter why: final goals need no justification; they just exist and have moral weight in virtue of existing. The fact that we wouldn’t be around if it happened (being dead, or humanity being extinct), and therefore would not suffer from it, doesn’t invalidate the current weight of our current preferences.