Whatever the correct answer is, the first step towards it has to be to taboo words like “experience” in sentences like, “But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?”
What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.
Are there, then, two of ‘you’? Depends what you mean by ‘you’. Has the weight of experience increased? Depends what you mean by ‘experience’. Think in terms of patterns and instances of patterns, and these questions become trivial.
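A minimal sketch of the pattern/instance distinction (the data and names here are mine, purely illustrative):

```python
# One pattern, two instances: copying multiplies instances, not patterns.
pattern = [1, 0, 1, 1]            # the pattern itself
copy_a = list(pattern)            # first instance
copy_b = list(pattern)            # second instance

instances = [copy_a, copy_b]
distinct_patterns = {tuple(i) for i in instances}

print(len(instances))             # 2 instances
print(len(distinct_patterns))     # 1 pattern
```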
I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.
Are there, then, two of ‘you’? Depends what you mean by ‘you’.
Can I redefine what I mean by “me” and thereby expect that I will win the lottery? Can I anticipate seeing “You Win” when I open my eyes? It still seems to me that expectation exists at a level where I cannot control it quite so freely, even by modifying my utility function. Perhaps I am mistaken.
I think the conflict is resolved by backing up to the point where you say that multiple copies of yourself count as more subjective experience weight (and therefore a higher chance of experiencing).
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let’s suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
I have a top-level post partly written up where I attempt to reduce “subjective experience” and show why your reductio about the Boltzmann Brain doesn’t follow, but here’s a summary of my reasoning:
Subjective experience appears to require a few components: first, forming mutual information (M/I) with its space/time environment; second, forming M/I with its past states, though of course not perfectly.
Now, look at the third trilemma horn: Britney Spears’s mind does not have M/I with your past memories. So it is flat-out incoherent to speak of “you” bouncing between different people: the chain of mutual information (your memories) is your subjective experience. This puts you in the position of having to say, “I know everything about the universe’s state, but I also must posit a causally impotent thing called the ‘I’ of Silas Barta,” which is an endorsement of epiphenomenalism.
Now, look back at the case of copying yourself: these copies retain mutual information with each other. They have each other’s exact memory. They are experiencing (by stipulation) the same inputs. So they have a total of one being’s subjective experience, and only count once. From the perspective of some computer running the universe, no additional data is needed to store each copy beyond the first.
The reason the Boltzmann Brain scenario doesn’t follow is this: while each copy knows the output of a copy, they would still not have mutual information with the far-off Big Universe copy, because they don’t know where it is! In the same way, a wall’s random molecular motions do not contain a copy of me, even though, under some interpretation, they will emulate me at some point.
I see! So you’re identifying the number of copies with the number of causally distinct copies—distinct in the causality of a physical process. So copying on a computer does not produce distinct people, but spontaneous production in a distant galaxy does. Thus real people would outweigh Boltzmann brains.
But what about causally distinct processes that split, see different tiny details, and then merge via forgetting?
(Still, this idea does seem to me like progress! Like we could get a bit closer to the “magical rightness” of the Born rule this way.)
Actually, let me revise that: I made it more complicated than it needs to be. Unless I’m missing something (and this does seem too simple), you can easily resolve the dilemma this way:
Copying your upload self does multiply your identities but adds nothing to your anticipated probabilities that stem from quantum branching.
So here’s what you should expect:
-There’s still a 1 in a billion chance of experiencing winning the lottery.
-In the event you win the lottery, you will also experience being among a trillion copies of yourself, each of whom also has this experience. Note the critical point: since they all wake up in the same Everett branch, their subjective experience does not get counted at the same “level” as the experience of the lottery loser.
-If you merge after winning the lottery you should expect, after the merge, to remember winning the lottery, and some random additional data that came from the different experiences the different copies had.
-This sums to: ~100% chance of losing the lottery, 1 in a billion chance of winning the lottery plus forgetting a few details.
-Regarding the implications of self-copying in general: Each copy (or original or instantiation or whatever—I’ll just say “copy” for brevity) feels just like you. Depending on how the process was actually carried out, the group of you could trace back which one was the source, and which one’s algorithm was instilled into an empty shell. If the process was carried out while you were asleep, you should assign an equal probability of being any given copy.
After the copy, your memories diverge and you have different identities. Merging combines the post-split memories into one person and then deletes such memories until you’re left with as much subjective time-history as if you had been one person the whole time, meaning you forget most of what happened in any given copy, kind of like the memory you have of your dreams when you wake up.
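The anticipation arithmetic above can be sketched in a few lines; this is a toy rendering of the stipulated numbers (a 1-in-a-billion branch, a trillion copies), with the rejected “copies multiply experience” view included only for contrast:

```python
# Toy rendering of the thread's stipulated numbers (not derived facts).
p_win = 1e-9               # quantum-branch probability of winning
copies_if_win = 10**12     # copies created inside the winning branch

# Copying inside one Everett branch adds identities, not branch weight:
p_experience_win = p_win          # still 1 in a billion
p_experience_lose = 1 - p_win     # ~100%

# The rejected "copies multiply experience" view, for contrast:
naive_weight_win = p_win * copies_if_win
naive_p_win = naive_weight_win / (naive_weight_win + (1 - p_win))

print(p_experience_win)        # 1e-09
print(round(naive_p_win, 6))   # 0.999001
```

On the naive weighting, making a trillion copies inside the winning branch would let you anticipate winning with near-certainty, which is exactly the conclusion being rejected here.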
Yeah, I get into trouble there. It feels as though two identical copies of a person = 1 pattern = no more people than before copying. But flip one bit and do you suddenly have two people? Can’t be right.
That said, the reason we value each person is because of their individuality. The more different two minds are, the closer they are to being two separate people? Erk.
Silas, looking forward to that post.
But flip one bit and do you suddenly have two people? Can’t be right.
Why not? Imagine that bit is the memory/knowledge of which copy they are. After the copying, each copy is naturally curious what happened, and recalls that bit. Now, if you had 1 person appearing in 2 places, every thought should be identical, right? Yet one copy will think ‘1!’; the other will think ‘0!’. As 1 != 0, this is a contradiction.
Not enough of a contradiction? Imagine further that the original had resolved to start thinking about hot sexy Playboy pinups if it was 1, but to think about all his childhood sins if 0. Or he decides quite arbitrarily to become a Sufi Muslim if 0, and a Mennonite if 1. Or… (insert arbitrarily complex mental processes contingent on that bit).
At some point you will surely admit that we now have 2 people and not just 1; but the only justifiable step at which to say they are 2 and not 1 is the first difference.
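The argument can be caricatured in code; the bit-contingent behaviours are the hypothetical ones from the comment above:

```python
# Two copies identical except for one bit; arbitrarily complex
# downstream processing is made contingent on that bit.
def copy_mind(which_copy_bit: int) -> str:
    # the hypothetical resolutions from the comment: 0 -> Sufi, 1 -> Mennonite
    return "Mennonite" if which_copy_bit == 1 else "Sufi"

thought_0 = copy_mind(0)
thought_1 = copy_mind(1)
print(thought_0, thought_1)   # Sufi Mennonite: divergent thoughts
```

Once the two instances are visibly running different computations, the claim is that the only non-arbitrary point at which to start counting them as two is the first differing bit.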
At some point you will surely admit that we now have 2 people and not just 1
Actually I won’t. While I grok your approach completely, I’d rather say my concept of ‘an individual’ breaks down once I have two minds with one bit’s difference, or two identical minds, or any of these borderline cases we’re so fond of.
Say I have two optimisers with one bit’s difference. If that bit means one copy converts to Sufism and the other to Mennonitism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-grainedness of the wetware, then we’re back to one person, since the two are, once again, functionally identical. Faced with contradictions like that, I’m expecting our idea of personal identity to go out the window pretty fast once tech like this actually arrives. Greg Egan’s Diaspora pretty much nails this for me; have a look.
All your ‘contradictions’ go out the window once you let go of the idea of a mind as an indivisible unit. If our concept of identity is to have any value (and it really has to) then we need to learn to think more like reality, which doesn’t care about things like ‘one bit’s difference’.
If that one bit is swallowed up in later neural computations due to the coarse-grainedness of the wetware, then we’re back to one person, since the two are, once again, functionally identical.
Ack. So if I understand you right, your alternative to bit-for-bit identity is to loosen it to some sort of future similarity, which can depend on future actions and outcomes; or in other words, there’s a radical indeterminacy about even the minds in our example: are they the same or are they different? Who knows; it depends on whether the Sufism comes out in the wash! Ask me later; but then again, even then I won’t be sure whether those 2 were the same when we started them running (always in motion the future is).
That seems like quite a bullet to bite, and I wonder whether you can hold to any meaningful ‘individual’, whether the difference be bit-wise or no. Even 2 distant, non-borderline minds might grow into each other.
I wonder whether you can hold to any meaningful ‘individual’, whether the difference be bit-wise or no.
Indeed, that’s what I’m driving at.
Harking back to my earlier comment, changing a single bit and suddenly having a whole new person is where my problem arises. If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn’t accept that. With that in mind, I prefer to bite that bullet rather than talk about degrees of personhood.
If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn’t accept that.
Here’s an intuition for you: you take the number 5 and add 1 to it; then you subtract 1 from it; don’t you have what you started with?
With that in mind, I prefer to bite that bullet rather than talk about degrees of personhood.
Well, I can’t really argue with that. As long as you realize you’re biting that bullet, I think we’re still in a situation where it’s just dueling intuitions. (Your intuition says one thing, mine another.)
The downside is that it’s not really that reductionistic.

What if you flip a bit in part of an offline memory store that you’re not consciously thinking about at the time or such?
What if I hack & remove $100 from your bank account? Are you just as wealthy as you were before, because you haven’t looked? If the 2 copies simply haven’t looked, or are otherwise still unaware, that doesn’t mean they are the same. Their possible futures diverge.
And, sure, it’s possible they might never realize: we could merge them back before they notice, just as I could restore the money before the next time you checked. But I think we would agree that I still committed a crime (theft) with your money; why couldn’t we feel that there was a crime (murder) in the merging?
Huh? My point is that a bit flip in a non-conscious part, before it influences any of the conscious processing… well, if prior to that bit flip you would have said there was only one being, then I’d say that afterwards they’d still not yet have diverged. Or at least, not entirely.
As for a merging, well, in that case who, precisely, is the one that’s being killed?
So only anything in immediate consciousness counts? Fine, let’s remove all of the long-term memories of one of the copies—after all, he’s not thinking about his childhood...
As for a merging, well, in that case who, precisely, is the one that’s being killed?
Obviously whichever one isn’t there afterwards; if the bit is 1, then 0 got killed off, and vice versa. If we erase both copies and replace them with the original, then both were killed.
I’d have to say that IF two (equivalent) instances of a mind count as “one mind”, then removing an unaccessed data store does not change that, for as long as the effect of the removal doesn’t propagate, directly or indirectly, to the conscious bits.
If one then restores that data store before anything is noticed about it being missing, then, conditional on the assumption that the two instances originally counted as only one being… so they remain.
EDIT: to clarify, though… my overall issue here is that I think we may be effectively implicitly treating conscious agents as irreducible entities. If we’re ever going to find an actual proper reduction of consciousness, well, we probably need to ask ourselves stuff like “what if two agents are bit for bit identical… except for these couple of bits here? What if they were completely identical? Is the couple bit difference enough that they might as well be completely different?” etc...
And if we restore a different long-term memory instead?
I think I’d still have to say “Nothing of significance happened until memory access occurs.”
Until then, well, how is it any different from stealing your books… and then replacing them before you notice?
Now, as I said, we probably ought to be asking questions like “what if, in the actual ‘conscious processing’ part of the agent, a few bits were changed in one instance… but just that?” So initially, before it propagates enough to completely diverge, what should we say? To say it completely changes everything instantly, well… that seems too much like saying “conscious agents are irreducible”, so...
(just to clarify: I’m more laying out a bit of my confusion here rather than anything else, plus noting that we seem to have been, in our quest to find reductions for aspects of consciousness, implicitly treating agents as irreducible in certain ways)
(just to clarify: I’m more laying out a bit of my confusion here rather than anything else, plus noting that we seem to have been, in our quest to find reductions for aspects of consciousness, implicitly treating agents as irreducible in certain ways)
Indeed. It’s not obvious what we can reduce agents down further into without losing agents entirely; bit-for-bit identity is at least clear in a few situations.
(To continue the example—if we see the unaccessed memory as being part of the agent, then obviously we can’t mess with it without changing the agent; but if we intuitively see it as like the agent having Internet access and the memory being a webpage, then we wouldn’t regard it as part of its identity.)
What if I hack & remove $100 from your bank account? Are you just as wealthy as you were before, because you haven’t looked?
Standard Dispute. If wealthy = same amount of money in the account, no. If wealthy = how rich you judge yourself to be, yes. The fact that ‘futures diverge’ is irrelevant up until the moment those two different pieces of information have causal contact with the brain. Until that point, yes, they are ‘the same’.
I don’t know; I’m still working through the formalism and drawing causal networks. And I just realized I should probably re-assimilate all the material in your Timeless Identity post, to see the relationship between identity and subjective experience. My brain hurts.
For now, let me just mention that I was trying to do something similar to what you did when identifying what d-connects the output of a calculator on Mars and Venus doing the same calculation. There’s an (imperfect) analog to that, if you imagine a program “causing” its two copies, which each then get different input. They can still make inferences about each other despite being d-separated by knowledge of their pre-fork state. The next step is to see how this mutual information relates to the kind between one sentient program’s subsequent states.
And, for bonus points, make sure to eliminate time by using the thermodynamic arrow and watch the entropy gain from copying a program.
Heh, maybe you just read more insight into my other comment than there actually was. Let me try to rephrase the last part:
I’m starting from the perspective of viewing subjective experience as something that forms mutual information with its space/time surroundings, and with its past states (and has some other attributes I’ll add later). This means that identifying which experience you will have in the future is a matter of finding which bodies have mutual information with which.
M/I can be identified by spotting inferences in a Bayesian causal network. So what would a network look like that has a sentient program being copied? You’d show the initial program as being the parent of two identical programs. But, as sentient programs with subjective experience, they remember (most of) their state before the split. This knowledge has implications for what inferences one of them can make about the other, and therefore how much mutual information they will have, which in turn has implications for how their subjective experiences are linked.
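A toy version of that network (the structure follows the comment; the noise level is my own choice, not anything from the discussion): a parent bit forks into two copies, each then perturbed by an independent input bit, and the copies retain positive mutual information through the shared parent.

```python
import itertools, math

P_FLIP = 0.1  # chance an independent input bit perturbs a copy (assumed)

def pr_bit(b, p_one):
    # probability that a Bernoulli(p_one) bit equals b
    return p_one if b == 1 else 1 - p_one

def joint_dist():
    # joint distribution over the two copies' states (C1, C2),
    # where Ci = P xor Ni, with P a fair bit and Ni independent noise
    dist = {}
    for p, n1, n2 in itertools.product([0, 1], repeat=3):
        w = 0.5 * pr_bit(n1, P_FLIP) * pr_bit(n2, P_FLIP)
        key = (p ^ n1, p ^ n2)
        dist[key] = dist.get(key, 0.0) + w
    return dist

def mutual_information(dist):
    # I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) )
    px, py = {}, {}
    for (x, y), w in dist.items():
        px[x] = px.get(x, 0.0) + w
        py[y] = py.get(y, 0.0) + w
    return sum(w * math.log2(w / (px[x] * py[y]))
               for (x, y), w in dist.items() if w > 0)

mi = mutual_information(joint_dist())
print(mi > 0)  # True: each copy carries information about the other
```

With the noise set to zero the copies become bit-for-bit identical and the mutual information saturates at one full bit; with fair-coin noise it drops to zero, which is one way of making “how distinct are the copies?” quantitative.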
My final sentence was noting the importance of checking the thermodynamic constraints on the processes going on, and the related issue, of making time removable from the model. So, I suggested that instead of phrasing questions about “previous/future times”, you should phrase such questions as being about “when the universe had lower/higher total entropy”. This will have implications for what the sentience will regard as “its past”.
Furthermore, the entropy calculation is affected by copy (and merge) operations. Copying involves deleting to make room for the new copies, whereas merging throws away information if the copies aren’t identical.
Now, does that make it any clearer, or does it just make it look like you overestimated my first post?
Can I redefine what I mean by “me” and thereby expect that I will win the lottery?
Yes? Obviously? You can go around redefining anything as anything. You can redefine a ham sandwich as a steel I-beam and thereby expect that a ham sandwich can support hundreds of pounds of force. The problem is that in that case you lose the property of ham sandwiches that says they are delicious.
In the case of redefining you as someone who wins the lottery, the property you are likely to lose is the property of generating warm fuzzy feelings of identification inside Eliezer Yudkowsky.
Words are just labels, but in order to be able to converse at all, we have to hold at least most of them in one place while we play with the remainder. We should try to avoid emulating Humpty Dumpty. Someone who calls a tail a leg is either trying to add to the category originally described by “leg” (turning it into the category now identified with “extremity” or something like that), or is appropriating a word (“leg”) for a category that already has a word (“tail”). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying “Let’s evaluate the content of the word ‘leg’ and maybe revise it for consistency.” The second is juvenile code invention.
Someone who calls a tail a leg is either trying to add to the category originally described by “leg” (turning it into the category now identified with “extremity” or something like that), or is appropriating a word (“leg”) for a category that already has a word (“tail”). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying “Let’s evaluate the content of the word ‘leg’ and maybe revise it for consistency.” The second is juvenile code invention.
What about if evolution repurposed some genus’s tail to function as a leg? The question wouldn’t be so juvenile or academic then. And before you roll your eyes, I can imagine someone saying,
“How many limbs does a mammal have, if you count the nose as a limb? Four. Calling a nose a limb doesn’t make it one.”
And then realizing they forgot about elephants, whose trunks have muscles that let them grip things as if they were hands.
That looks like category reevaluation, not code-making, to me. If you think an elephant’s trunk should be called a limb, and you think that elephants have five limbs, that’s category reevaluation; if you think that elephant trunks should be called limbs and elephants have one limb, that’s code.
Speakers Use Their Actual Language, so someone who uses ‘leg’ to mean leg or tail speaks truly when they say ‘dogs have five legs.’ But it remains the case that dogs have only four legs, and nobody can reasonably expect a ham sandwich to support hundreds of pounds of force. This is because the previous sentence uses English, not the counterfactual language we’ve been invited to imagine.
Whatever the correct answer is, the first step towards it has to be to taboo words like “experience” in sentences like, “But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?”
What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.
Are there, then, two of ‘you’? Depends what you mean by ‘you’. Has the weight of experience increased? Depends what you mean by ‘experience’. Think in terms of patterns and instances of patterns, and these questions become trivial.
I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.
Can I redefine what I mean by “me” and thereby expect that I will win the lottery? Can I anticipate seeing “You Win” when I open my eyes? It still seems to me that expectation exists at a level where I cannot control it quite so freely, even by modifying my utility function. Perhaps I am mistaken.
I think the conflict is resolved by backing up to the point where you say that multiple copies of yourself count as more subjective experience weight (and therefore a higher chance of experiencing).
I have a top-level post partly written up where I attempt to reduce “subjective experience” and show why your reductio about the Boltzmann Brain doesn’t follow, but here’s a summary of my reasoning:
Subjective experience appears to require a few components: first, forming mutual information with its space/time environment. Second, forming M/I with its past states, though of course not perfectly.
Now, look at the third trilemma horn: Britney Spears’s mind does not have M/I with your past memories. So it is flat-out incoherent to speak of “you” bouncing between different people: the chain of mutual information (your memories) is your subjective experience. This puts you in the position of having to say that “I know everything about the universe’s state, but I also must posit a causally-impotent thing called the ‘I’ of Silas Barta.”—which is an endorsement of epiphenominalism.
Now, look back at the case of copying yourself: these copies retain mutual information with each other. They have each other’s exact memory. They are experiencing (by stipulation) the same inputs. So they have a total of one being’s subjective experience, and only count once. From the perspective of some computer that runs the universe, it does not need additional data to store each copy, but rather, just the first.
The reason the Boltzmann Brain scenario doesn’t follow is this: while each copy knows the output of a copy, they would still not have mutual information with the far-off Big Universe copy, because they don’t know where it is! In the same way, a wall’s random molecual motions do not have a copy of me, even though, under some interpretation, they will emulate me at some point.
I see! So you’re identifying the number of copies with the number of causally distinct copies—distinct in the causality of a physical process. So copying on a computer does not produce distinct people, but spontaneous production in a distant galaxy does. Thus real people would outweigh Boltzmann brains.
But what about causally distinct processes that split, see different tiny details, and then merge via forgetting?
(Still, this idea does seem to me like progress! Like we could get a bit closer to the “magical rightness” of the Born rules this way.)
Actually, let me revise that: I made it more complicated than it needs to be. Unless I’m missing something (and this does seem too simple), you can easily resolve the dilemma this way:
Copying your upload self does multiply your identities but adds nothing to your anticipated probabilities that stem from quantum branching.
So here’s what you should expect:
-There’s still a 1 in a billion chance of experiencing winning the lottery.
-In the event you win the lottery, you will also experience being among a trillion copies of yourself, each of whom also have this experience. Note the critical point: since they all wake up in the same Everett branch, their subjective experience does not get counted in at the same “level” as the experience of the lottery loser.
-If you merge after winning the lottery you should expect, after the merge, to remember winning the lottery, and some random additional data that came from the different experiences the different copies had.
-This sums to: ~100% chance of losing the lottery, 1 in a billion chance of winning the lottery plus forgetting a few details.
-Regarding the implications of self-copying in general: Each copy (or original or instantiation or whatever—I’ll just say “copy” for brevity) feels just like you. Depending on how the process was actually carried out, the group of you could trace back which one was the source, and which one’s algorithm was instilled into an empty shell. If the process was carried out while you were asleep, you should assign an equal probability of being any given copy.
After the copy, your memories diverge and you have different identities. Merging combines the post-split memories into one person and then deletes such memories until you’re left with as much subjective time-history as if you one person the whole time, meaning you forget most of what happened in any given copy—kind of like the memory you have of your dreams when you wake up.
Yeah I get into trouble there. It feels as though two identical copies of a person = 1 pattern = no more people than before copying. But flip one bit and do you suddenly have two people? Can’t be right.
That said, the reason we value each person is because of their individuality. The more different two minds, the closer they are to two separate people? Erk.
Silas, looking forward to that post.
Why not? Imagine that bit is the memory/knowledge of which copy they are. After the copying, each copy naturally is curious what happened, and recall that bit. Now, if you had 1 person appearing in 2 places, it should be that every thought would be identical, right? Yet one copy will think ‘1!‘; the other will think ‘0!’. As 1 != 0, this is a contradiction.
Not enough of a contradiction? Imagine further that the original had resolved to start thinking about hot sexy Playboy pinups if it was 1, but to think about all his childhood sins if 0. Or he decides quite arbitrarily to become a Sufi Muslim if 0, and a Mennonite if 1. Or… (insert arbitrarily complex mental processes contingent on that bit).
At some point you will surely admit that we now have 2 people and not just 1; but the only justifiable step at which to say they are 2 and not 1 is the first difference.
Actually I won’t. While I grok your approach completely, I’d rather say my concept of ‘an individual’ breaks down once I have two minds with one bit’s difference, or two identical minds, or any of these borderline cases we’re so fond of.
Say I have two optimisers with one bit’s difference. If that bit means one copy converts to Sufism and the other to Mennonism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we’re back to one person since the two are, once again, functionally identical. Faced with contradictions like that, I’m expecting our idea of personal identity to go out the window pretty fast once tech like this actually arrives. Greg Egan’s Diaspora pretty much nails this for me, have a look.
All your ‘contradictions’ go out the window once you let go of the idea of a mind as an indivisible unit. If our concept of identity is to have any value (and it really has to) then we need to learn to think more like reality, which doesn’t care about things like ‘one bit’s difference’.
Ack. So if I understand you right, your alternative to bit-for-bit identity is to loosen it to some sort of future similarity, which can depend on future actions and outcomes; or in other words, there’s a radical indeterminacy about even the minds in our example: are they same or are they different, who knows, it depends on whether the Sufism comes out in the wash! Ask me later; but then again, even then I won’t be sure whether those 2 were the same when we started them running (always in motion the future is).
That seems like quite a bullet to bite, and I wonder whether you can hold to any meaningful ‘individual’, whether the difference be bit-wise or no. Even 2 distant non-borderline mindsmight grow into each other.
Indeed, that’s what I’m driving at.
Harking back to my earlier comment, changing a single bit and suddenly having a whole new person is where my problem arises. If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn’t accept that. With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.
Here’s an intuition for you: you take the number 5 and add 1 to it; then you subtract 1 from it; don’t you have what you started with?
Well, I can’t really argue with that. As long as you realize you’re biting that bullet, I think we’re still in a situation where it’s just dueling intuitions. (Your intuition says one thing, mine another.)
The downside is that it’s not really that reductionistic.
What if you flip a bit in part of an offline memory store that you’re not consciously thinking about at the time or such?
What if I hack & remove $100 from your bank account. Are you just as wealthy as you were before, because you haven’t looked? If the 2 copies simply haven’t looked or otherwise are still unaware, that doesn’t mean they are the same. Their possible futures diverge.
And, sure, it’s possible they might never realize—we could merge them back before they notice, just as I could restore the money before the next time you checked, but I think we would agree that I still committed a crime (theft) with your money; why couldn’t we feel that there was a crime (murder) in the merging?
Huh? My point is a bitflip in a non conscious part, before it influences any of the conscious processing, well, if prior to that bit flip you would have said there was only one being, then I’d say after that they’d still not yet diverged. Or at least, not entirely.
As far as a merging, well, in that case who, precisely, is the one that’s being killed?
So only anything in immediate consciousness counts? Fine, let’s remove all of the long-term memories of one of the copies—after all, he’s not thinking about his childhood...
Obviously whichever one isn’t there afterwards; if the bit is 1, then 0 got killed off & vice versa. If we erase both copies and replace with the original, then both were killed.
I’d have to say that IF two (equivalent) instances of a mind count as “one mind”, then removing an unaccessed data store does not change that for the duration that the effect of the removal doesn’t propagate directly or indirectly to the conscious bits.
If one then restores that data store before anything was noticed regarding it being missing, then, conditional on the assumption that IF the two instances originally only counted as one being, then.… so they remain.
EDIT: to clarify, though… my overall issue here is that I think we may be effectively implicitly treating conscious agents as irreducible entities. If we’re ever going to find an actual proper reduction of consciousness, well, we probably need to ask ourselves stuff like “what if two agents are bit for bit identical… except for these couple of bits here? What if they were completely identical? Is the couple bit difference enough that they might as well be completely different?” etc...
And if we restore a different long-term memory instead?
I think I’d have to say still “Nothing of significance happened until memory access occurs”
Until then, well, how’s it any different than stealing your books… and then replacing them before you notice?
Now, as I said, we probably ought to be asking questions like: what if, in the actual ‘conscious processing’ part of the agent, a few bits were changed in one instance… but just that… so initially, before it propagates enough to completely diverge, what should we say? To say it completely changes everything instantly, well… that seems too much like saying “conscious agents are irreducible”, so...
(just to clarify: I’m more laying out a bit of my confusion here rather than anything else, plus noting that we seem to have been, in our quest to find reductions for aspects of consciousness, implicitly treating agents as irreducible in certain ways)
Indeed. It’s not obvious what we can reduce agents down further into without losing agents entirely; bit-for-bit identity is at least clear in a few situations.
(To continue the example—if we see the unaccessed memory as being part of the agent, then obviously we can’t mess with it without changing the agent; but if we intuitively see it as like the agent having Internet access and the memory being a webpage, then we wouldn’t regard it as part of its identity.)
Standard Dispute. If wealthy = the amount of money in the account, no. If wealthy = how rich you judge yourself to be, yes. The fact that ‘futures diverge’ is irrelevant up until the moment those two different pieces of information have causal contact with the brain. Until that point, yes, they are ‘the same’.
I don’t know; I’m still working through the formalism and drawing causal networks. And I just realized I should probably re-assimilate all the material in your Timeless Identity post, to see the relationship between identity and subjective experience. My brain hurts.
For now, let me just mention that I was trying to do something similar to what you did when identifying what d-connects the output of a calculator on Mars and Venus doing the same calculation. There’s an (imperfect) analog to that, if you imagine a program “causing” its two copies, which each then get different input. They can still make inferences about each other despite being d-separated by knowledge of their pre-fork state. The next step is to see how this mutual information relates to the kind between one sentient program’s subsequent states.
And, for bonus points, make sure to eliminate time by using the thermodynamic arrow and watch the entropy gain from copying a program.
...okay, that part didn’t make any particular sense to me.
Heh, maybe you just read more insight into my other comment than there actually was. Let me try to rephrase the last:
I’m starting from the perspective of viewing subjective experience as something that forms mutual information with its space/time surroundings, and with its past states (and has some other attributes I’ll add later). This means that identifying which experience you will have in the future is a matter of finding which bodies have mutual information with which.
M/I can be identified by spotting inferences in a Bayesian causal network. So what would a network look like that has a sentient program being copied? You’d show the initial program as being the parent of two identical programs. But, as sentient programs with subjective experience, they remember (most of) their state before the split. This knowledge has implications for what inferences one of them can make about the other, and therefore how much mutual information they will have, which in turn has implications for how their subjective experiences are linked.
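The fork network described above can be made concrete with a toy calculation. In this sketch (the variable names, the uniform distributions, and the bit-sized state are my assumptions, not anything fixed by the thread), a pre-fork state S is the common parent of two copies X and Y, each of which then receives an independent input. The mutual information between the copies is exactly the entropy of the shared pre-fork state, and conditioning on S d-separates them, driving the conditional mutual information to zero:

```python
import itertools
from math import log2
from collections import defaultdict

def mutual_information(joint):
    """I(X;Y) in bits, given a dict {(x, y): p} over the joint distribution."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Pre-fork state s and the two copies' post-fork inputs a, b: all fair bits.
joint = defaultdict(float)
for s, a, b in itertools.product([0, 1], repeat=3):
    x, y = (s, a), (s, b)   # each copy = remembered pre-fork state + its own input
    joint[(x, y)] += 1 / 8

print(mutual_information(joint))    # → 1.0 bit: the shared pre-fork state

# Conditioning on S screens the copies off from each other: within each
# value of s, their inputs are independent, so conditional MI is zero.
for s in [0, 1]:
    cond = {(x, y): p * 2 for (x, y), p in joint.items() if x[0] == s}
    print(mutual_information(cond))  # → 0.0
```

This matches the d-separation point from the calculator analogy: the copies can make inferences about each other only through what they both inherit from the pre-fork state.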
My final sentence was noting the importance of checking the thermodynamic constraints on the processes going on, and the related issue, of making time removable from the model. So, I suggested that instead of phrasing questions about “previous/future times”, you should phrase such questions as being about “when the universe had lower/higher total entropy”. This will have implications for what the sentience will regard as “its past”.
Furthermore, the entropy calculation is affected by copy (and merge) operations. Copying involves deleting to make room for the new copies, whereas merging throws away information if the copies aren’t identical.
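One way to put numbers on that last claim, under my own assumption that Shannon entropy is the right bookkeeping (the thread doesn’t commit to a formalism): merging variants throws away exactly the information that distinguished them, which is the entropy of the “which variant was it” distribution.

```python
from math import log2

def merge_entropy_cost(variant_probs):
    """Bits of information discarded by merging variants with these weights.

    This is just the Shannon entropy of the which-variant distribution:
    the information needed to reconstruct which copy you started from.
    """
    return sum(-p * log2(p) for p in variant_probs if p > 0)

print(merge_entropy_cost([0.5, 0.5]))  # → 1.0 bit: two equally likely variants
print(merge_entropy_cost([1.0]))       # → 0.0: identical copies, nothing lost
```

So merging bit-for-bit identical copies is informationally free, while merging divergent ones necessarily discards something, which is one way to cash out the earlier worry about whether a merge could count as a crime.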
Now, does that make it any clearer, or does it just make it look like you overestimated my first post?
Yes? Obviously? You can go around redefining anything as anything. You can redefine a ham sandwich as a steel I-beam and thereby expect that a ham sandwich can support hundreds of pounds of force. The problem is that in that case you lose the property of ham sandwiches that says they are delicious.
In the case of redefining you as someone who wins the lottery, the property you are likely to lose is the property of generating warm fuzzy feelings of identification inside Eliezer Yudkowsky.
“If you call a tail a leg, how many legs does a dog have...? Four. Calling a tail a leg doesn’t make it one.”
That was said by someone who didn’t realize that words are just labels.
Words are just labels, but in order to be able to converse at all, we have to hold at least most of them in one place while we play with the remainder. We should try to avoid emulating Humpty Dumpty. Someone who calls a tail a leg is either trying to add to the category originally described by “leg” (turning it into the category now identified with “extremity” or something like that), or is appropriating a word (“leg”) for a category that already has a word (“tail”). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying “Let’s evaluate the content of the word ‘leg’ and maybe revise it for consistency.” The second is juvenile code invention.
What about if evolution repurposed some genus’s tail to function as a leg? The question wouldn’t be so juvenile or academic then. And before you roll your eyes, I can imagine someone saying,
“How many limbs does a mammal have, if you count the nose as a limb? Four. Calling a nose a limb doesn’t make it one.”
And then realizing they forgot about elephants, whose trunks have muscles that allow it to grip things as if it had a hand.
That looks like category reevaluation, not code-making, to me. If you think an elephant’s trunk should be called a limb, and you think that elephants have five limbs, that’s category reevaluation; if you think that elephant trunks should be called limbs and elephants have one limb, that’s code.
Speakers Use Their Actual Language, so someone who uses ‘leg’ to mean leg or tail speaks truly when they say ‘dogs have five legs.’ But it remains the case that dogs have only four legs, and nobody can reasonably expect a ham sandwich to support hundreds of pounds of force. This is because the previous sentence uses English, not the counterfactual language we’ve been invited to imagine.