I would suggest that pairing people with artificial partners is somewhat like wireheading, in that people might not want to be placed in such circumstances, even though once they are in them they might be happier.
This seems to stretch the notion of wireheading beyond usefulness. Many situations exist where we might endorse options retrospectively that we wouldn’t prospectively, whether through bias, limited information, random changes in perspective, or normal lack of maturity (“eew, girls have cooties!”). Relatively few of them rely on superstimuli or break our goal structure in a strong way.