At present, while mind-copying technology doesn’t exist, there’s an extremely strong connection between the mind-states that occupy a given cranium at different times, much stronger than that between any two mind-states that occupy different crania. (This shouldn’t be taken naively: my past self and I might disagree on many propositions that my current self and you would agree on. But there’s still an architectural commonality between my present and past mind-states that’s unmistakably stronger than that between mine and yours.)
Essentially, grouping mind-states into agents in this way carves reality at its proper joints, especially for the purpose of deciding which actions to take now so as to satisfy my current goals for future world-states.
“Essentially, grouping mind-states into agents in this way carves reality at its proper joints.”
So does specifying rubes and bleggs. This is what I mean by there being nothing fundamentally separating them. It might matter whether it’s red or blue, or whether it’s a cube or an egg, but it can’t possibly matter whether it’s a rube or a blegg, because it isn’t a rube or a blegg.
At present, there aren’t any truly intermediate cases, so “agent with an identity over time” is a useful concept to include in our models; likewise, if all red objects in a domain are cubic and contain vanadium, “rube” becomes a useful concept.
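To make that condition concrete, here’s a minimal sketch in Python, assuming a toy domain of colored, shaped, metal-bearing objects (the attribute names and the two sample domains are purely illustrative): a label like “rube” pays rent exactly insofar as knowing it lets you predict features you haven’t yet observed, and it stops paying once intermediate cases enter the domain.

```python
# Minimal sketch, assuming a toy domain of objects with three attributes.
# A label "pays rent" to the extent that knowing it predicts the
# attributes you haven't observed yet.

from collections import Counter

def label(obj):
    # Classify by color alone: red objects are "rubes", blue ones "bleggs".
    return "rube" if obj["color"] == "red" else "blegg"

def predictive_accuracy(domain, attribute):
    # For each label, find the majority value of `attribute`, then measure
    # how often that majority guess matches an object's actual value.
    counts_by_label = {}
    for obj in domain:
        counts_by_label.setdefault(label(obj), Counter())[obj[attribute]] += 1
    guess = {lab: c.most_common(1)[0][0] for lab, c in counts_by_label.items()}
    hits = sum(1 for obj in domain if guess[label(obj)] == obj[attribute])
    return hits / len(domain)

# Perfect clustering: every red object is a vanadium-bearing cube.
clustered = ([{"color": "red", "shape": "cube", "metal": "vanadium"}] * 50
             + [{"color": "blue", "shape": "egg", "metal": "palladium"}] * 50)

# Intermediate cases: some red objects are palladium-bearing eggs.
mixed = clustered + [{"color": "red", "shape": "egg", "metal": "palladium"}] * 50

for name, domain in [("clustered", clustered), ("mixed", mixed)]:
    print(name, {attr: round(predictive_accuracy(domain, attr), 2)
                 for attr in ("shape", "metal")})
```

On the clustered domain both predictions come out perfect (1.0); adding the intermediate red eggs drops them to about two-thirds, which is the sense in which the label stops carving the domain at its joints.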
In futures where mind-copying and mind-engineering become plentiful, this regularity will no longer hold, and our decision theories will need to incorporate more exotic kinds of “agents” in order to be successful. I’m not claiming that agents are fundamental (they aren’t), just that they’re tremendously useful components of certain approximations, like the wings of the airplane in a simulator.
Even if a concept isn’t fundamental, that doesn’t mean you should exclude it from every model. Check instead to see whether it pays rent.
My point isn’t that it’s a useless concept. It’s that it would be silly to consider it morally important.

You argued that a concept “isn’t fundamental” because in principle it’s possible to construct things that gradually escape the current natural category, and that it’s therefore morally unimportant. Can you give an example of a morally important category?
Sorry, but my moral valuations aren’t up for grabs. I’m not perfectly selfish, but neither am I perfectly altruistic; I care more about the welfare of agents who are more like me, and particularly about the welfare of agents who happen to remember having been me. That valuation has been drummed into my brain pretty thoroughly by evolution, and it may well survive any extrapolation.
But at this point, I think we’ve passed the productive stage of this particular discussion.