Ignoring the fact that replacements tend to be expensive, I’d consider them of equal utility if I believed in personal identity. I don’t, so not only are they equally good, they are, for all intents and purposes, the same choice.
Downvoted for using such an ill-defined term as “personal identity” without additional specification.
I don’t think there’s any fundamental connection between past and future iterations of the same person. You die and are replaced by someone else every moment. Extending your life and replacing you are the same thing.
I don’t need to posit any metaphysical principle; my best model of the universe (at a certain granularity) includes “agents” composed of different mind-states across different times, with very similar architecture and goals, connected by memory to one another and coordinating their actions.
Exactly what changes if you remove the “agents”, and just have mind-states that happen to have similar architecture and goals?
At present, when mind-copying technology doesn’t exist, there’s an extremely strong connection exhibited by the mind-states that occupy a given cranium at different times, much stronger than that exhibited by any two mind-states that occupy different crania. (This shouldn’t be taken naively: my past self and I might disagree on many propositions that my current self and you would agree on, but there’s still an architectural commonality between my present and past mind-states that’s unmistakably stronger than that between mine and yours.)
Essentially, grouping together mind-states into agents in this way carves reality at its proper joints, especially for purposes of deciding on actions now that will satisfy my current goals for future world-states.
So does specifying rubes and bleggs. This is what I mean by there being nothing fundamentally separating them. It might matter whether it’s red or blue, or whether it’s a cube or an egg, but it can’t possibly matter whether it’s a rube or a blegg, because it isn’t a rube or a blegg.
At present, there aren’t any truly intermediate cases, so “agents with an identity over time” is a useful concept to include in our models; if all red objects in a domain are cubic and contain vanadium, “rube” becomes a useful concept.
In futures where mind-copying and mind-engineering become plentiful, this regularity will no longer hold, and our decision theories will need to incorporate more exotic kinds of “agents” in order to be successful. I’m not claiming that agents are fundamental (they aren’t), just that they’re tremendously useful components of certain approximations, like the wings of the airplane in a simulator.
Even if a concept isn’t fundamental, that doesn’t mean you should exclude it from every model. Check instead to see whether it pays rent.
My point isn’t that it’s a useless concept. It’s that it would be silly to consider it morally important.
You argued that a concept “isn’t fundamental” because in principle it’s possible to construct things that gradually escape the current natural category, and that it is therefore morally unimportant. Can you give an example of a morally important category?
Sorry, but my moral valuations aren’t up for grabs. I’m not perfectly selfish, but neither am I perfectly altruistic; I care more about the welfare of agents more like me, and particularly about the welfare of agents who happen to remember having been me. That valuation has been drummed into my brain pretty thoroughly by evolution, and it may well survive in any extrapolation.
But at this point, I think we’ve passed the productive stage of this particular discussion.
Like memory?
There is nothing morally important about remembering being someone. There’s no reason the probability of being you has to equal the probability of being one of the people you remember being. Memory exists, but it’s not relevant.
Read The Anthropic Trilemma. I agree with the third horn.
I find this odd because it sounds like the exact opposite of the patternist view of identity, where memory is all that is relevant.
Would you not mind then if some process erased all of your memories? Or replaced them completely with the memories of someone else?
It’s the lack of a patternist view of identity. I have no view of identity, so I disagree.
It would be likely to cause problems, but beyond that, no. I don’t see why losing your memory would be intrinsically bad.
I think the main thing I’m against is the idea that any of this is fundamental enough to have any effect on anthropics. Erasing your memory and replacing it with someone else’s who’s still alive won’t make it half as likely to be you, just because there’s only a 50% chance of going from past him to you. Erasing your memory every day won’t make it tens of thousands of times as likely to be one of them, on the grounds that you’re now tens of thousands of people.
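To spell out the naive arithmetic I’m rejecting (a rough sketch; the 80-year lifespan below is just an assumed round number): on that view, copying his memories over yours leaves two people who remember being him, so each successor gets

$$P(\text{being this successor} \mid \text{having been him}) = \tfrac{1}{2},$$

and erasing your memory at the end of every day carves a life into roughly

$$80 \text{ years} \times 365 \text{ days/year} \approx 29{,}000$$

separate “people”, inflating the anthropic weight of being one of them by tens of thousands. I don’t think either operation changes anything anthropically.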
You could, in principle, have memory mentioned in your utility function, but it’s not like it’s the end of the world if someone dies. I mean that in the sense that existence ceases for them or something like that. You could still consider it bad enough to warrant the phrase “it’s like the end of the world”.
I don’t know if I would call a mind-state a person; persons usually respond to things, think, and so on, and a mind-state can’t do any of that. It’s somewhat like saying “a movie is made of little separate movies” when it’s actually made of separate frames. And death implies the end of a person, not a mind-state. It might be a bit silly of me to make all this fuss about definitions, but it’s already a quite messy subject; let’s not make it any messier.
Fine, there’s no fundamental connection between separate mind-states. Personhood can be defined (mostly), but it’s not fundamentally important whether or not two given mind-states are connected by a person. All that matters is the mind-states, whether you’re talking about morality or anthropics.
All this is of course very speculative, but couldn’t you just reduce mind-states into sub-mind-states? If you look at split-brain patients, whose corpus callosum has been cut, the two hemispheres in some situations behave and report as if they were two different people; so it seems (at least to me) that there are no irreducible quanta such as “mind-states” either. My point is that you could make the same argument:
It’s not fundamentally important whether or not two given sub-mind-states are connected by a mind-state. All that matters is the sub-mind-states.
It seems to me that my qualia are all experienced together, or at least the ones that I’m aware of. As such, there is more than just sub-mind-states. There is a fundamental difference. For what it’s worth, I don’t consider this difference morally relevant, but it’s there.