Thank you for the questions, and my apologies for the delayed response.
I’m curious to what extent these intuitions are symmetric. Say that the group of like-minded and mutually friendly extreme masochists existed first, and wanted to create their mutually preferred, mutually satisfying sadist. Do you still have a problem with that?
Yes, with the admission that there are specific attributes of masochism and sadism that are common but not universal to all possible relationships, or even to all sexual relationships with heavy differences in power dynamics(1). It’s less negative in the immediate term, because one hundred and fifty masochists making a single sadist results in a maximum of around forty million created beings instead of one trillion. In the long term, the equilibrium ends up pretty much identical.
(1) For contrast, the structures involved in wanting to perform menial labor without recompense are different from those involved in wanting other people to perform labor for you, even before you get to a post-scarcity society. Likewise, there are differences in how prostitution fantasies generally work versus how fantasies about hiring prostitutes do.
Or you can be a guardian, and enjoy teaching and protecting people, and find yourself creating people that are weak and in need of guidance.
The above sounds like a description of a “good parent”, as commonly understood!
I’m not predisposed toward child-raising, but from my understanding the point of being a “good parent” is not valuing making someone weak: it’s valuing making someone strong. It’s the limitations of our tools that have forced us to deal with years of children not being able to stand upright. Parents are generally judged negatively if their offspring are not able to operate on their own by certain points.
To be consistent with this, do you think that parenting of babies as it currently exists is problematic and creepy, and should be banned once we have the capability to create grown-ups from scratch?
If it were possible to simulate or otherwise avoid the joys of the terrible twos, I’d probably consider it more ethical. I don’t know that I have the tools to properly evaluate the loss in values between the two actions, though. Once you’ve got eternity or even a couple reliable centuries, the damages of ten or twenty years bother me a lot less.
These sorts of created beings aren’t likely to be in that sort of ten- or twenty-year timeframe, though. At least according to the Caelum est Conterrens fic, the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset. You’re not talking about someone being weak for a year or a decade or even a century: they’ll be powerless forever.
I haven’t thought on it enough to say that creating such beings should be banned (although my gut reaction favors doing so), but I do know it’d strike me as very creepy. If it were possible to significantly reduce or eliminate the number of negative development experiences entities undergo, I’d probably encourage it.
If David had wanted a symmetrically fulfilled partner slightly more intelligent than him, someone he could always learn from, I get the feeling you wouldn’t find it as creepy. (Correct me if that’s not so). But the situation is symmetrical. Why is it important who came first?
In that particular case, the equilibrium is less bounded. Butterscotch isn’t able to become better than David or even to desire becoming better than David, and a number of pathways for David’s desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.
That’s not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.
In the case of intelligence, that’s not that bad. Real-world people tend toward a bounded curve on that, and there are reasons we prefer socializing within a relatively narrow bound downward. Other closed equilibria are more unpleasant. I don’t have the right to say that Lars’ fate is wrong—it at least gets close to the catgirl volcano threshold—but it’s shallow enough to be concerning. This sort of thing isn’t quite wireheading, but it’s close enough to be hard to tell the precise difference.
More generally, some people—quite probably all people—are going to go into the future with hangups. Barring some really massive improvements in philosophy, we may not even know the exact nature of those hangups. I’m really hesitant to have a Machine Overlord start zapping neurons to improve things without the permission of the brains’ owners (yes, even recognizing that a sufficiently powerful AI will get the permission it wants).
As a result, that’s going to privilege the values of already-extant entities in ways that it won’t privilege those of newly created ones: some actions don’t translate through time because of this. I’m hesitant to change David’s (or, once she’s already created, Butterscotch’s) brain against its owner’s will, but since we’re making Butterscotch’s mind from scratch to begin with, both the responsibilities and the ethical questions are different.
My finding some versions creepier than others reflects my personal values, and at least some of those personal values reflect structures that won’t exist in the FiOverse. It’s not as harmful when David talks down to Butterscotch, because she really hasn’t achieved everything he has (and the simulation even gives him easy tools to make sure he’s only teaching her subjects she hasn’t achieved yet), whereas part of why I find it creepy is that a lot of real-world people assume other folk are less knowledgeable than themselves without good evidence. Self-destructive cycles probably don’t happen under CelestAI’s watch. Lars and his groupies don’t have to worry about unwanted pregnancy, or alcoholism, or anything like that, and at least some of my discomfort comes from those sorts of things.
At the same time, I don’t know that I want a universe that doesn’t at least occasionally tempt us beyond or within our comfort zones.
Sorry, I’m not following your first point. The relevant “specific attribute” that sadism and masochism seem to have in this context is that they specifically squick User:gattsuru. If you’re trying to claim something else is objectively bad about them, you’ve not communicated it.
I’m not predisposed toward child-raising, but from my understanding the point of being a “good parent” is not valuing making someone weak: it’s valuing making someone strong.
Yes, and my comparison stands; you specified a person who valued teaching and protecting people, not someone who valued having the experience of teaching and protecting people. Someone with the former desires isn’t going to be happy if the people they’re teaching don’t get stronger.
You seem to be envisaging some maximally perverse hybrid of preference-satisfaction and wireheading, where I don’t actually value really truly teaching someone, but instead of cheaply feeding me delusions, someone’s making actual minds for me to fail to teach!
the vast majority of immortals (artificial or uploaded) stay within a fairly limited set of experiences and values based on their initial valueset.
We are definitely working from very different assumptions here. “stay within a fairly limited set of experiences and values based on their initial valueset” describes, well, anything recognisable as a person. The alternative to that is not a magical being of perfect freedom; it’s being the dude from Permutation City randomly preferring to carve table legs for a century.
In that particular case, the equilibrium is less bounded. Butterscotch isn’t able to become better than David or even to desire becoming better than David, and a number of pathways for David’s desire to learn or teach can collapse such that Butterscotch would not be able to become better or desire becoming better than herself.
I don’t think that’s what we’re given in the story, though. If Butterscotch is made such that she desires self-improvement, then we know that David’s desires cannot in fact collapse in such a way, because otherwise she would have been made differently.
Agreed that it’s a problem if the creator is less omniscient, though.
That’s not really the case the other way around. Someone who wants a mentor that knows more than them has to have an unbounded future in the FiOverse, both for themselves and their mentor.
Butterscotch is that person. That is my point about symmetry.
I don’t have the right to say that Lars’ fate is wrong—it at least gets close to the catgirl volcano threshold—but it’s shallow enough to be concerning. This sort of thing isn’t quite wireheading, but it’s close enough to be hard to tell the precise difference.
But then—what do you want to happen? Presumably you think it is possible for a Lars to actually exist. But from elsewhere in your comment, you don’t want an outside optimiser to step in and make them less “shallow”, and you seem dubious about even the ability to give consent. Would you deem it more authentic to simulate angst und bange unto the end of time?