Or a second scenario: The AI doesn’t try to create new relationships at all, whether with artificial or natural partners. Instead it just breaks up all relationships, and then wireheads everyone. It calculates that the utility gained from the wireheading is greater than the utility lost in breaking up the relationships. Is this good or bad?
(I would suggest that pairing people with artificial partners is somewhat like wireheading, in that people might not want to be placed in such circumstances, even though once they are in them they might be happier.)
This seems to stretch the notion of wireheading beyond usefulness. Many situations exist where we might endorse options retrospectively that we wouldn’t prospectively, whether through bias, limited information, random changes in perspective, or normal lack of maturity (“eew, girls have cooties!”). Relatively few of them rely on superstimuli or break our goal structure in a strong way.