Suffering Criticism: An ancestral simulation would recreate a huge amount of suffering.
Response: Humans suffer and live in a world that contains great suffering, and yet very few humans prefer non-existence over their suffering. Evolution culls existential pessimists.
Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy. The relatively small, finite suffering may not add up to much in this comparison. The initial suffering might even heighten, by contrast, the subsequent elevation to a joyful state, but this is speculative.
Even if the future joy of the recreated past human would outweigh the suffering he or she endured while being recreated, all else being equal it would be even better to create entirely new kinds of people from scratch, people who wouldn’t need to suffer at all.
I know I prefer to exist now. I’d also like to survive for a very long time, indefinitely. I’m not even sure the person I’ll be 10 or 20 years from now will still be significantly “me”, and I’m not sure the closest projection of my self onto a system incapable of suffering would still be me either. Sure, I’d prefer not to suffer, but beyond that, there’s a certain amount of suffering I’m ready to endure if I have to in order to stay alive.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already? Also, what if we created people who could suffer, but who’d be happy with it? Would such a life be worthwhile? Is the fact that suffering is bad something universal, or a quirk of Terran animal neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who exists already and a new mind. It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they’ll in any case be choosing between two kinds of minds that have existed once. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.
Yes. See also David Pearce’s notion of beings who’ve replaced pain and pleasure with gradients of pleasure—instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have a priority over those who exist already?
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who exists already and a new mind. It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
I’m proposing to create these minds, if I survive. Many will want this. If we have FAI, it will, by definition, help me.
I would rather live in a future afterlife that has my grandparents in it than your ‘better designs’. Better by whose evaluation? I’d also say that my sense of ‘better’ outweighs any other sense of ‘better’ - my terminal values are my own.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe
I couldn’t care less about some corner of the universe that is not causally connected to my corner. The Big World stuff isn’t very relevant: this is a decision between two versions of our local future, one with people we love in it and one without.
Those who will actually create the minds will want to rescue people in the past, so they can reasonably anticipate being rescued themselves. Or differently put, those who create the minds will want the right answer to “should I rescue people or create new people” to be “rescue people”.
There’s a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history. I suspect the latter are enough more interesting to be created first. We might move on to creating the populations of interesting alternate histories, as well as randomly selected worlds and so forth down the line.
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication. Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it’s hard to say how common they would be throughout the universe—thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
There’s a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history.
What difference is that?
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication.
I don’t understand what you mean by “only a duplication”.
Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it’s hard to say how common they would be throughout the universe—thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.
This doesn’t make any sense to me.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child’s well-being?
There’s a causal connection in one case that is absent in the other, and a correspondingly higher distribution in the pasts of similar worlds.
I don’t understand what you mean by “only a duplication”.
Duplication of effort as well as effect with respect to other parts of the universe. Meaning you are increasing the numbers of immortals and not granting continued life to those who would otherwise be deprived of it.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child’s well-being?
We aren’t talking about the creation of random new lives as a matter of reproduction, we’re talking about the resurrection of people who have lived substantial lives already as part of the universe’s natural existence. If you want to resurrect the most people (out of those who have actually existed and died) in order to grant them some redress against death, you are going to have to recreate people who, for physically plausible reasons, would have actually died.
It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they’ll in any case be choosing between two kinds of minds that have existed once. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
We seem to have a moral intuition that things that occur in far distant parts of the universe, with no causal connection to us, aren’t morally relevant. You seem to think that this intuition is a side-effect of the population ethics principle you apparently believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist out there should be considered, even though they likely exist in some part of the Big World.
Minds that existed once, and were causally connected to our world in certain ways, should be given priority over minds that have only existed in distant, causally unconnected parts of the Big World.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
We seem to have a moral intuition that things that occur in far distant parts of the universe, with no causal connection to us, aren’t morally relevant. You seem to think that this intuition is a side-effect of the population ethics principle you apparently believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we tend to also discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist out there should be considered, even though they likely exist in some part of the Big World.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions and that the extent to which their preferences are satisfied isn’t therefore affected by anything that we do.
One could of course make arguments relating to acausal trade, or suggest that we should try to satisfy even the preferences of beings who never found out about it. But to do that, we would have to know something about the distribution of preferences in the universe. And there our uncertainty is so immense that it’s better to just focus on the preferences of the humans here on Earth.
But in any case, these kinds of considerations don’t seem relevant for the “if we create new minds, should they be similar to minds that have already once existed” question. It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe. Rather, our part of the universe contains information that can be used for creating a mind that resembles an earlier mind, and it also contains information that can be used for creating a more novel mind. When the decision is made, both minds are still non-existent in our part of the universe, and existent in some other.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
I assumed that the rest of what I wrote made it clear why I thought it was clearly the better choice.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions
If that was the reason, then people would feel the same about causally connected entities who can’t find out about our decisions. But they don’t. People generally consider it bad to spread rumors about someone, even if that person never finds out. We also consider it immoral to ruin the reputation of dead people, even though they can never find out.
I think a better explanation for this intuition is simply that we have a bedrock moral principle to discount dissatisfied preferences unless they are about a person’s own life. Parfit argues similarly here.
This principle also explains other intuitive reactions people have. For instance, in this problem given by Stephen Landsburg, people tend to think the rape victim has been harmed, but that McCrankypants and McMustardseed haven’t been. This can be explained if we consider that the preference the victim had was about her life, whereas the preference of the other two wasn’t.
Just as we discount preference violations on a personal level that aren’t about someone’s own life, so we can discount the existence of distant populations that do not impact the one we are a part of.
and that the extent to which their preferences are satisfied isn’t therefore affected by anything that we do.
Just because someone never discovers that their preference isn’t satisfied doesn’t make it any less unsatisfied. Preferences are about desiring one world state over another, not about perception. If someone makes the world different from the way you want it to be, then your preference is unsatisfied, even if you never find out.
Of course, as I said before, if said preference is not about one’s own life in some way we can probably discount it.
It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe.
Yes it does, if you think four-dimensionally. The mind we’re seeking to recreate exists in our universe’s past, whereas the novel mind does not.
People sometimes take actions because a dead friend or relative would have wanted them to. We also take action to satisfy the preferences of people who are certain to exist in the future. This indicates that we do indeed continue to value preferences that aren’t in existence at this very moment.
It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
Still I wonder, then: what could I do to enhance my probability of being resurrected, if worst comes to worst and I can’t manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (better according to whose values, though)?
I realize that this probably won’t be very useful advice for you, but I’d recommend working on letting go of the sense of having a lasting self in the first place. Not that I’d fully alieve that yet either, but the closer I’ve gotten to always alieving it, the less I’ve felt like I have reason to worry about (not) living forever. Me possibly dying in forty years is no big deal if I don’t even think I’m the same person tomorrow, or five minutes from now.
Me possibly dying in forty years is no big deal if I don’t even think I’m the same person tomorrow, or five minutes from now.
You’re confusing two meanings of the word “the same.” When we refer to a person as “the same,” that doesn’t mean they haven’t changed; it means that they’ve changed in some ways, but not in others.
If you define “same” as “totally unchanging” then I don’t want to be the same person five minutes from now. Being frozen in time forever so I’d never change would be tantamount to death. There are some ways I want to change, like acquiring new skills and memories.
But there are other ways I don’t want to change. I want my values to stay the same, and I want to remember my life. If I change in those ways, that is bad. It doesn’t matter whether this happens in an abrupt way, like dying, or a slow way, like an FAI gradually turning me into a different person.
If people change in undesirable ways, then it is a good thing to restore them through resurrection. I want to be resurrected if I need to be. And I want you to be resurrected too. Because the parts of you that shouldn’t change are valuable, even if you’ve convinced yourself they’re not.
You’re confusing two meanings of the word “the same.” When we refer to a person as “the same,” that doesn’t mean they haven’t changed; it means that they’ve changed in some ways, but not in others.
Sure, I’m aware of that. But the bit that you quoted didn’t make claims about what “the same” means in any objective sense—it only said that if you choose your definition of “the same” appropriately, then you can stop worrying about your long-term survival and thus feel better. (At least that’s how it worked for me: I used to worry about my long-term survival a lot more when I still found personal identity to be a meaningful concept.)
I’ve pondered this some, and it seems that the best strategy in distant historical eras was simply to be famous, and more specifically to write an autobiography. Having successful descendants also seems to grow in importance as we get into the modern era. For us today there is cryonics, of course, and being successful, famous, or wealthy is obviously viable, but blogging is probably to be recommended as well.
The first people to become immortal and to be able to simulate others, will want to simulate (“revive”) their own loved ones who died just before immortality was developed.
These people, once resurrected and integrated into society, will themselves want to resurrect their own loved ones who died a little earlier than that.
And so on until most, if not all, of humanity is simulated.
An interesting consequence of this is historical drift: my recreation of my father would differ somewhat from reality, my grandfather’s more so, and so on. This wouldn’t be a huge concern for any of us, though, as we wouldn’t be able to tell the difference. As long as the reconstructions pass interpersonal Turing tests, all is good.
I am disappointed that this has not spawned more principled objections. Morally speaking, creating people from scratch is far, far worse than resurrecting existing people, even if the existing people experience some suffering in the course of the resurrection.
Your entire argument seems to be based on the “Impersonal Total Principle” (ITP): an ethical principle stating that all that matters is the total amount of positive and negative experience in the world, and that other factors, like the identity of the people having those experiences, are not ethically important. I consider this principle to be both wrong and gravely immoral, and will explain why in detail below.
When developing moral principles, what we typically do is take certain moral intuitions we have, assume that they are being generated by some sort of overarching moral principle, and then try to figure out what that principle is. If the principle is correct (or at least a step in the right direction), then it will probably also agree with our other moral intuitions; if it isn’t, then it probably won’t.
The ITP was developed by Derek Parfit as a proposed solution to the Nonidentity Problem. It happens to give the intuitively correct answer to that problem, but generates so many wrong answers in so many other scenarios that I believe it is obviously wrong.
For instance, the Nonidentity Problem has a version where one child’s life will be better than the other’s because of reduced capabilities. I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn’t seem obvious at all to me that we should choose the one with the better life. Plus, imagine an iteration of the NIP where the choice is between unhealthy triplets and a healthy child. I think most people would agree that a woman who picks the unhealthy triplets is doing something even worse than the woman who picks one unhealthy child in the original NIP. But according to the ITP she’s done something better.
Then there are issues like the fact that the ITP suggests there’s nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did. And of course, there is the Repugnant Conclusion.
But I think the nail in the coffin for the ITP is that people seem to accept the Sadistic Conclusion. People regularly harm themselves and others in order to avoid having more children, and they seem to regard this as a moral duty, not a selfish one.
So the ITP is wrong. What do I propose to replace it with? Not average utilitarianism; that’s just as crazy. Rather, I’d replace it with the principle that a small population with higher utility per person is generally better than a large population with lower utility per person, even if the total amount of utility is larger.
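To make the disagreement with the Impersonal Total Principle concrete, here is a toy calculation. The populations and utility numbers (`pop_a`, `pop_b`) are invented purely for illustration and carry no argument by themselves:

```python
# Two hypothetical populations (numbers invented for illustration):
# A: few people, each with high utility.
# B: many people, each with low utility, but a higher total.
pop_a = [9.0] * 10    # 10 people at utility 9 -> total 90
pop_b = [1.0] * 100   # 100 people at utility 1 -> total 100

def total(pop):
    return sum(pop)

def per_person(pop):
    return sum(pop) / len(pop)

# The Impersonal Total Principle ranks B above A (total 100 > 90).
print(total(pop_b) > total(pop_a))            # True

# The principle proposed above ranks A above B despite A's lower
# total, because utility per person is higher (9.0 > 1.0).
print(per_person(pop_a) > per_person(pop_b))  # True
```

Nothing in the snippet argues for either ranking; it only shows that the two principles can disagree about the very same pair of populations.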
Now, I understand you’re a personal identity skeptic. That’s okay. I’m perfectly willing to translate this principle into phrasing that makes no mention of “persons” or of people being “the same.” Here goes: It is better to create sets of experiences that are linked in certain ways (e.g., by memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experience is lower because of this. It may even be better to create some amount of negative experience if doing so allows you to make sure more of the experience sets are linked in certain ways.
So there you have it. I completely totally reject the moral principle you base your argument on. It is a terrible principle that does not derive from human moral intuitions at all. Everyone should reject it.
I also want to respond to the other points you’ve made in this thread but this is getting long, so I’ll reply to them separately.
Your entire argument seems to be based on the “Impersonal Total Principle” (ITP): an ethical principle stating that all that matters is the total amount of positive and negative experience in the world, and that other factors, like the identity of the people having those experiences, are not ethically important.
Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don’t find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: “the total amount of positive and negative experience is all that matters” is a much stronger claim than a mere “personal identity doesn’t matter”. I have only made the latter claim, not the former.
That said, I’m not necessarily rejecting the ITP either. It does seem like a relatively reasonable claim, but that’s more because I’m skeptical about the alternatives to the ITP than because the ITP itself feels strongly convincing.
I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn’t seem obvious at all to me that we should chose the one with the better life.
To me, ambitious preferences sound like a possible good thing because they might lead to the world becoming better off on net. “The reasonable man adapts himself to his environment. The unreasonable man adapts his environment to himself. All progress is therefore dependent upon the unreasonable man.” That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can’t, then it seems obvious to me that we should prefer creating the non-ambitious child.
Then there are issues like the fact that the ITP suggests there’s nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did.
Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely, that society works much better, and with much lower levels of stress and fear, if everyone has strong guarantees that society puts a high value on preserving their lives. Knowing that you might be killed at any moment doesn’t do wonders for your mental health.
And of course, there is the repugnant conclusion.
I stopped considering the Repugnant Conclusion a problem after reading John Maxwell’s, Michael Sullivan’s, and Eliezer’s comments on your “Mere Cable Channel Addition Paradox” post. And even if I hadn’t been convinced by those, I also lean strongly towards negative utilitarianism, which likewise avoids the Repugnant Conclusion.
Here goes: It is better to create sets of experiences that are linked in certain ways (e.g., by memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experience is lower because of this. It may even be better to create some amount of negative experience if doing so allows you to make sure more of the experience sets are linked in certain ways.
While this phrasing indeed doesn’t make any mention of “persons”, it still seems to me primarily motivated by a desire to create a moral theory based on persons. If not, demanding the “link” criteria seems like an arbitrary decision.
Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don’t find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: “the total amount of positive and negative experience is all that matters” is a much stronger claim than a mere “personal identity doesn’t matter”. I have only made the latter claim, not the former.
I have the same reductionist views of personal identity as you. I completely agree that it isn’t ontologically fundamental or anything like that. The difference between us is that when you concluded it wasn’t ontologically fundamental you stopped caring about it. I, by contrast, just replaced the symbol with what it stood for. I figured out what it was that we meant by “personal identity” and concluded that that was what I had really cared about all along.
That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can’t, then it seems obvious to me that we should prefer creating the non-ambitious child.
I can’t agree with this. If I had the choice between a wireheaded child who lived a life of perfect passive bliss, and a child who spent their life scientifically studying nature (but lived a hermit-like existence, so their discoveries wouldn’t benefit others), I would pick the second child, even if they endured many hardships the wirehead would not. I would also prefer not to be wireheaded, even if the wireheaded me would have an easier life.
When considering creating people who have different life goals, my first objective is of course, making sure both of those people would live lives worth living. But if the answer is yes for both of them then my decision would be based primarily on whose life goals were more in line with my ideals about what humanity should try to be, rather than whose life would be easier.
I suppose I am advocating something like G.E. Moore’s Ideal Utilitarianism, except instead of trying to maximize ideals directly I am advocating creating people who care about those ideals and then maximizing their utility.
Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely, that society works much better, and with much lower levels of stress and fear, if everyone has strong guarantees that society puts a high value on preserving their lives.
I agree, but I also think killing and replacing is wrong in principle.
I stopped considering the Repugnant Conclusion a problem after reading John Maxwell’s, Michael Sullivan’s, and Eliezer’s comments on your “Mere Cable Channel Addition Paradox” post.
I did too, but then I realized I was making a mistake. The problem with the RC lies in its premises, not its practicality. I ultimately realized that the Mere Addition Principle is false, and that this is what is wrong with the RC.
While this phrasing indeed doesn’t make any mention of “persons”, it still seems to me primarily motivated by a desire to create a moral theory based on persons.
No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as “personal identity” to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn’t consider morally valuable?
If not, demanding the “link” criteria seems like an arbitrary decision.
You can make absolutely anything sound arbitrary if you use the right rhetoric. All you have to do is take the thing that I care about, find a category it shares with things I don’t care about nearly as much, and then ask me why I am arbitrarily caring for one thing over the other even though they are in the same category.
For instance, I could say “Pain and pleasure are both brain states. It’s ridiculously arbitrary to care about one brain state over another, when they are all just states that occur in your brain. You should be more inclusive and less arbitrary. Now please climb into that iron maiden.”
I believe personal identity is one of the cornerstones of morality, whether you call it by that name, or replace the name with the things it stands for. I don’t consider it arbitrary at all.
No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as “personal identity” to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn’t consider morally valuable?
Of course you should devise a moral theory based on what you consider morally valuable; it just fails to be persuasive to me, since it appeals to moral intuitions that I do not share (and which thus strike me as arbitrary).
Continued debate in this thread doesn’t seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values. So there’s not very much to be said beyond “I think that X is valuable” and “I disagree”.
Continued debate in this thread doesn’t seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values.
You’re probably right.
EDIT: However, I do think you should consider if your moral intuitions really are different, or if you’ve somehow shut some important intuitions off by use of the “make anything arbitrary” rhetorical strategy I described earlier.
Also, I should clarify that while I disapprove of the normative conclusions you’ve drawn from personal identity skepticism, I don’t see any inherent problem with using it to improve your mental health in the way you described (when you said that it decreased your anxiety about death). If your emotional systems are out of control and torturing you with excessive anxiety I don’t see any reason why you shouldn’t try a mental trick like that to treat it.
Even if the future joy of the recreated past human would outweigh the suffering they endured while being recreated, all else being equal it would be even better to create entirely new kinds of people from scratch, who wouldn’t need to suffer at all.
I know I prefer to exist now. I’d also like to survive for a very long time, indefinitely. I’m also not even sure the person I’ll be 10 or 20 years from now will still be significantly “me”. I’m not sure the closest projection of my self onto a system incapable of suffering at all would still be me. Sure, I’d prefer not to suffer, but beyond that, there’s a certain amount of suffering I’m ready to endure, if I have to, in order to stay alive.
Then on the other side of this question you could consider creating new sentiences who couldn’t suffer at all. But why would these have priority over those who exist already? Also, what if we created people who could suffer, but who’d be happy with it? Would such a life be worthwhile? Is the badness of suffering something universal, or a quirk of terran animal neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you still know that chopping your leg off is usually a bad idea.
From the point of view of those who’ll actually create the minds, it’s not a choice between somebody who exists already and a new mind. It’s the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
One might also invoke Big Universe considerations to say that even the “new” kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they’ll regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole “this mind has existed once, so it should be given priority over one that hasn’t” argument doesn’t make a lot of sense.
Yes. See also David Pearce’s notion of beings who’ve replaced pain and pleasure with gradients of pleasure—instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.
I’m proposing to create these minds, if I survive. Many will want this. If we have FAI, it will help me, by its definition.
I would rather live in a future afterlife that has my grandparents in it than your ‘better designs’. Better by whose evaluation? I’d also say that my sense of ‘better’ outweighs any other sense of ‘better’ - my terminal values are my own.
I couldn’t care less about some corner of the universe that is not causally connected to my corner. The big world stuff isn’t very relevant: this is a decision between two versions of our local future, one with people we love in it and one without.
Those who will actually create the minds will want to rescue people in the past, so they can reasonably anticipate being rescued themselves. Or differently put, those who create the minds will want the right answer to “should I rescue people or create new people” to be “rescue people”.
There’s a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history. I suspect the latter are sufficiently more interesting that they would be created first. We might move on to creating the populations of interesting alternate histories, as well as randomly selected worlds and so forth, further down the line.
Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all), it seems like there is less utility in trying to resurrect them, as it would only be a duplication. Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it’s hard to say how common they would be throughout the universe—thus it would seem unfair to give them priority in resurrection over beings who evolved with the capacity for suffering.
What difference is that?
I don’t understand what you mean by “only a duplication”.
This doesn’t make any sense to me.
Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child’s well-being?
There’s a causal connection in one case that is absent in the other, and a correspondingly higher distribution in the pasts of similar worlds.
Duplication of effort, as well as of effect, with respect to other parts of the universe. Meaning you are merely increasing the number of immortals, not granting continued life to those who would otherwise be deprived of it.
We aren’t talking about the creation of random new lives as a matter of reproduction, we’re talking about the resurrection of people who have lived substantial lives already as part of the universe’s natural existence. If you want to resurrect the most people (out of those who have actually existed and died) in order to grant them some redress against death, you are going to have to recreate people who, for physically plausible reasons, would have actually died.
If the modeled mind is the same person as the mind that existed once, it is clearly the better choice. And by same person I of course mean that it is related to a preexisting mind in certain ways.
We seem to have a moral intuition that things that occur in far-distant parts of the universe, with no causal connection to us, aren’t morally relevant. You seem to think that this intuition is a side effect of the population-ethics principle you seem to believe in (the Impersonal Total Principle). However, I would argue that it is a direct, terminal value.
Evidence for my view is the fact that we also tend to discount the desires of causally unconnected people in distant parts of the universe in non-population-ethics situations. For instance, when discussing whether to pave over a forest, we think the desires of those who live near the forest should be considered. However, we do not think the desires of the vast number of Forest-Maximizing AIs who doubtless exist out there should be considered, even though they likely exist in some part of the Big World.
Minds that existed once, and were causally connected to our world in certain ways, should be given priority over minds that have only existed in distant, causally unconnected parts of the Big World.
“Clearly the better choice” is stating your conclusion rather than making an argument for it.
There’s an obvious reason for discounting the preferences of causally unconnected entities: if they really are causally unconnected, that means that they can’t find out about our decisions and that the extent to which their preferences are satisfied isn’t therefore affected by anything that we do.
One could of course make arguments relating to acausal trade, or suggest that we should try to satisfy even the preferences of beings who never found out about it. But to do that, we would have to know something about the distribution of preferences in the universe. And there our uncertainty is so immense that it’s better to just focus on the preferences of the humans here on Earth.
But in any case, these kinds of considerations don’t seem relevant for the “if we create new minds, should they be similar to minds that have already once existed” question. It’s not like the mind that we’re seeking to recreate already exists within our part of the universe and has a preference for being (re-)created, while a novel mind that also has a preference for being (re-)created exists in some other part of the universe. Rather, our part of the universe contains information that can be used for creating a mind that resembles an earlier mind, and it also contains information that can be used for creating a more novel mind. When the decision is made, both minds are still non-existent in our part of the universe, and existent in some other.
I assumed that the rest of what I wrote made it clear why I thought it was clearly the better choice.
If that were the reason, then people would feel the same about causally connected entities who can’t find out about our decisions. But they don’t. People generally consider it bad to spread rumors about someone, even if they never find out. We also consider it immoral to ruin the reputation of dead people, even though they can’t find out.
I think a better explanation for this intuition is simply that we have a bedrock moral principle to discount dissatisfied preferences unless they are about a person’s own life. Parfit argues similarly here.
This principle also explains other intuitive reactions people have. For instance, in this problem given by Stephen Landsburg, people tend to think the rape victim has been harmed, but that McCrankypants and McMustardseed haven’t been. This can be explained if we consider that the preference the victim had was about her life, whereas the preference of the other two wasn’t.
Just as we discount preference violations on a personal level that aren’t about someone’s own life, so we can discount the existence of distant populations that do not impact the one we are a part of.
Just because someone never discovers that their preference isn’t satisfied doesn’t make it any less unsatisfied. Preferences are about desiring one world state over another, not about perception. If someone makes the world different from the way you want it to be, then your preference is unsatisfied, even if you never find out.
Of course, as I said before, if said preference is not about one’s own life in some way we can probably discount it.
Yes it does, if you think four-dimensionally. The mind we’re seeking to recreate exists in our universe’s past, whereas the novel mind does not.
People sometimes take actions because a dead friend or relative would have wanted them to. We also take action to satisfy the preferences of people who are certain to exist in the future. This indicates that we do indeed continue to value preferences that aren’t in existence at this very moment.
Still, I wonder: what could I do to enhance my probability of being resurrected, if worse comes to worst and I can’t manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (according to whose values, though)?
I realize that this probably won’t be very useful advice for you, but I’d recommend working on letting go of the sense of having a lasting self in the first place. Not that I’d fully alieve in that yet either, but the closer I’ve gotten to always alieving it, the less I’ve felt like I have reason to worry about (not) living forever. Me possibly dying in forty years is no big deal if I don’t even think I’m the same person tomorrow, or five minutes from now.
You’re confusing two meanings of the word “the same.” When we refer to a person as “the same,” that doesn’t mean they haven’t changed; it means that they’ve changed in some ways, but not in others.
If you define “same” as “totally unchanging” then I don’t want to be the same person five minutes from now. Being frozen in time forever so I’d never change would be tantamount to death. There are some ways I want to change, like acquiring new skills and memories.
But there are other ways I don’t want to change. I want my values to stay the same, and I want to remember my life. If I change in that way this is bad. It doesn’t matter if this is done in an abrupt way, like dying, or a slow way, like an FAI gradually turning me into a different person.
If people change in undesirable ways, then it is a good thing to restore them through resurrection. I want to be resurrected if I need to be. And I want you to be resurrected too. Because the parts of you that shouldn’t change are valuable, even if you’ve convinced yourself they’re not.
Sure, I’m aware of that. But the bit that you quoted didn’t make claims about what “the same” means in any objective sense—it only said that if you choose your definition of “the same” appropriately, then you can stop worrying about your long-term survival and thus feel better. (At least that’s how it worked for me: I used to worry about my long-term survival a lot more when I still found personal identity to be a meaningful concept.)
I’ve pondered this some, and it seems that the best strategy in distant historical eras was simply to be famous, and more specifically to write an autobiography. Having successful descendants also seems to grow in importance as we get into the modern era. For us today we have cryonics of course, and being successful/famous/wealthy is obviously viable, but blogging is probably to be recommended as well.
The first people to become immortal and to be able to simulate others, will want to simulate (“revive”) their own loved ones who died just before immortality was developed.
These people, once resurrected and integrated into society, will themselves want to resurrect their own loved ones who died a little earlier than that.
And so on until most, if not all, of humanity is simulated.
Yes this.
An interesting consequence of this is historical drift: my recreation of my father would differ somewhat from reality, my grandfather’s more so, and so on. This wouldn’t be a huge concern for any of us, though, as we wouldn’t be able to tell the difference. As long as the reconstructions pass interpersonal Turing tests, all is good.
I am disappointed that this has not spawned more principled objections. Morally speaking, creating people from scratch is far, far worse than resurrecting existing people, even if the existing people experience some suffering in the course of the resurrection.
Your entire argument seems to be based on the “Impersonal Total Principle” (ITP): an ethical principle stating that all that matters is the total amount of positive and negative experiences in the world; other factors, like the identity of the people having those experiences, are not ethically important. I consider this principle to be both wrong and gravely immoral, and will explain why in detail below.
When developing moral principles, what we typically do is take certain moral intuitions we have, assume that they are being generated by some sort of overarching moral principle, and then try to figure out what that principle is. If the principle is correct (or at least a step in the right direction), then other moral intuitions will probably also generate it; if it isn’t, then they probably won’t.
The ITP was developed by Derek Parfit as a proposed solution to the Nonidentity Problem. It happens to give the intuitively correct answer to that problem, but generates so many wrong answers in so many other scenarios that I believe it is obviously wrong.
For instance, the Nonidentity Problem has an instance where one child’s life will be worse than the other’s because of reduced capabilities. I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn’t seem obvious at all to me that we should choose the one with the better life. Plus, imagine an iteration of the NIP where the choice is between unhealthy triplets and a healthy single child. I think most people would agree that a woman who picks the unhealthy triplets is doing something even worse than the woman who picks one unhealthy child in the original NIP. But according to the ITP, she’s done something better.
Then there are issues like the fact that the ITP suggests there’s nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did. And of course, there is the Repugnant Conclusion.
But I think the nail in the coffin for the ITP is that people seem to accept the Sadistic Conclusion. People regularly harm themselves and others in order to avoid having more children, and they seem to regard this as a moral duty, not a selfish one.
So the ITP is wrong. What do I propose to replace it with? Not average utilitarianism; that’s just as crazy. Rather, I’d replace it with the principle that a small population with higher utility per person is generally better than a large population with lower utility per person, even if the total amount of utility is larger.
Now, I understand you’re a personal identity skeptic. That’s okay. I’m perfectly willing to translate this principle into phrasing that makes no mention of “persons” or people being “the same.” Here goes: it is better to create sets of experiences that are linked in certain ways (i.e., by memory, personality, etc.), even if the total amount of positive experiences is lower because of this. It may even be better to create some amount of negative experiences, if doing so allows you to make sure more of the experience sets are linked in those ways.
So there you have it. I completely totally reject the moral principle you base your argument on. It is a terrible principle that does not derive from human moral intuitions at all. Everyone should reject it.
I also want to respond to the other points you’ve made in this thread but this is getting long, so I’ll reply to them separately.
Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don’t find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: “the total amount of positive and negative experience is all that matters” is a much stronger claim than a mere “personal identity doesn’t matter”. I have only made the latter claim, not the former.
That said, I’m not necessarily rejecting the ITP either. It does seem like a relatively reasonable claim, but that’s more because I’m skeptical about the alternatives for ITP than because ITP itself would feel that strongly convincing.
To me, ambitious preferences sound like a possible good thing because they might lead to the world becoming better off on net. “The reasonable man adapts himself to his environment. The unreasonable man adapts his environment to himself. All progress is therefore dependent upon the unreasonable man.” That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can’t, then it seems obvious to me that we should prefer creating the non-ambitious child.
Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely, that society works much better, and with much lower levels of stress and fear, if everyone has strong guarantees that society puts a high value on preserving their lives. Knowing that you might be killed at any moment doesn’t do wonders for your mental health.
I stopped considering the Repugnant Conclusion a problem after reading John Maxwell’s, Michael Sullivan’s and Eliezer’s comments on your “Mere Cable Channel Addition Paradox” post. And even if I hadn’t been convinced by those, I also lean strongly towards negative utilitarianism, which likewise avoids the Repugnant Conclusion.
While this phrasing indeed doesn’t make any mention of “persons”, it still seems to me primarily motivated by a desire to create a moral theory based on persons. If not, demanding the “link” criteria seems like an arbitrary decision.
I have the same reductionist views of personal identity as you. I completely agree that it isn’t ontologically fundamental or anything like that. The difference between us is that when you concluded it wasn’t ontologically fundamental you stopped caring about it. I, by contrast, just replaced the symbol with what it stood for. I figured out what it was that we meant by “personal identity” and concluded that that was what I had really cared about all along.
I can’t agree with this. If I had the choice between a wireheaded child who lived a life of perfect passive bliss, and a child who spent their life scientifically studying nature (but lived a hermit-like existence, so their discoveries wouldn’t benefit others), I would pick the second child, even if they endured many hardships the wirehead would not. I would also prefer not to be wireheaded, even if the wireheaded me would have an easier life.
When considering creating people who have different life goals, my first objective is of course, making sure both of those people would live lives worth living. But if the answer is yes for both of them then my decision would be based primarily on whose life goals were more in line with my ideals about what humanity should try to be, rather than whose life would be easier.
I suppose I am advocating something like G.E. Moore’s Ideal Utilitarianism, except instead of trying to maximize ideals directly I am advocating creating people who care about those ideals and then maximizing their utility.
I agree, but I also think killing and replacing is wrong in principle.
I did too, but then I realized I was making a mistake. I realized that the problem with the RC lay in its premises, not its practicality. I ultimately concluded that the Mere Addition Principle is false, and that that is what is wrong with the RC.
No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as “personal identity” to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn’t consider morally valuable?
You can make absolutely anything sound arbitrary if you use the right rhetoric. All you have to do is take the thing that I care about, find a category it shares with things I don’t care about nearly as much, and then ask me why I am arbitrarily caring for one thing over the other even though they are in the same category.
For instance, I could say “Pain and pleasure are both brain states. It’s ridiculously arbitrary to care about one brain state over another, when they are all just states that occur in your brain. You should be more inclusive and less arbitrary. Now please climb into that iron maiden.”
I believe personal identity is one of the cornerstones of morality, whether you call it by that name, or replace the name with the things it stands for. I don’t consider it arbitrary at all.
Of course you should devise a moral theory based on what you consider morally valuable; it just fails to be persuasive to me, since it appeals to moral intuitions that I do not share (and which thus strike me as arbitrary).
Continued debate in this thread doesn’t seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values. So there’s not very much to be said beyond “I think that X is valuable” and “I disagree”.
You’re probably right.
EDIT: However, I do think you should consider if your moral intuitions really are different, or if you’ve somehow shut some important intuitions off by use of the “make anything arbitrary” rhetorical strategy I described earlier.
Also, I should clarify that while I disapprove of the normative conclusions you’ve drawn from personal identity skepticism, I don’t see any inherent problem with using it to improve your mental health in the way you described (when you said that it decreased your anxiety about death). If your emotional systems are out of control and torturing you with excessive anxiety I don’t see any reason why you shouldn’t try a mental trick like that to treat it.