You seem to be viewing a Friendly-to-them singularity as freezing the couple's utility functions in place. I agree that it might stabilize the bond against currently known changes, such as those you cite: fading limerence, the limits of human pair-bond stability, and some others. I'm skeptical about stability with respect to all important changes over a million years. Even a superintelligence is going to encounter surprises, whether from exploring the boundaries of design spaces or from exploring physical space. Even for it, the future is uncertain, and its balancing of subgoals and values must likewise carry some uncertainty. If the consequences of one of those surprises make one or both members of the couple morph into something rather different, is sticking with the original bond sensible, or even meaningful? And if the couple precludes all such changes, is that a reasonable choice over so long a period? Is it even viable? Precluding change in the face of surprise is a dangerous choice.