I don’t know; I’m still working through the formalism and drawing causal networks. And I just realized I should probably re-assimilate all the material in your Timeless Identity post, to see the relationship between identity and subjective experience. My brain hurts.
For now, let me just mention that I was trying to do something similar to what you did when identifying what d-connects the outputs of a calculator on Mars and one on Venus doing the same calculation. There’s an (imperfect) analog to that if you imagine a program “causing” its two copies, which each then receive different input. They can still make inferences about each other, despite being d-separated once their pre-fork state is known. The next step is to see how this mutual information relates to the kind that holds between one sentient program’s successive states.
And, for bonus points, make sure to eliminate time by using the thermodynamic arrow and watch the entropy gain from copying a program.
...okay, that part didn’t make any particular sense to me.
Heh, maybe you just read more insight into my other comment than was actually there. Let me try to rephrase the last part:
I’m starting from the perspective of viewing subjective experience as something that forms mutual information with its space/time surroundings, and with its past states (and has some other attributes I’ll add later). This means that identifying which experience you will have in the future is a matter of finding which bodies have mutual information with which.
M/I can be identified by spotting inferences in a Bayesian causal network. So what would a network look like that has a sentient program being copied? You’d show the initial program as being the parent of two identical programs. But, as sentient programs with subjective experience, they remember (most of) their state before the split. This knowledge has implications for what inferences one of them can make about the other, and therefore how much mutual information they will have, which in turn has implications for how their subjective experiences are linked.
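To make that network concrete, here is a minimal sketch in Python. Everything specific in it is my own illustrative assumption (a one-bit program state, a 10% chance that each copy’s bit diverges under its new input); it just shows the fork creating mutual information between the two copies, and what conditioning on the shared pre-fork state does to it:

```python
import math

# Illustrative fork network: a one-bit pre-fork state S is copied into A and B.
# Each copy independently diverges (flips its bit) with probability p,
# standing in for the different inputs the copies receive after the split.

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

p = 0.1  # assumed divergence probability per copy

# Joint distribution of (A, B), marginalizing over S ~ Bernoulli(0.5):
# the copies are d-connected through the fork, so I(A;B) > 0.
joint_ab = {}
for s in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            pr = 0.5 * (p if a != s else 1 - p) * (p if b != s else 1 - p)
            joint_ab[(a, b)] = joint_ab.get((a, b), 0.0) + pr

print(f"I(A;B) marginally: {mutual_information(joint_ab):.4f} bits")

# Conditioned on S = 0, the copies are d-separated and independent:
# I(A;B | S=0) = 0.
joint_given_s0 = {(a, b): (p if a else 1 - p) * (p if b else 1 - p)
                  for a in (0, 1) for b in (0, 1)}
print(f"I(A;B | S=0): {abs(mutual_information(joint_given_s0)):.4f} bits")
```

With these numbers the marginal mutual information comes out to roughly a third of a bit, while conditioning on the pre-fork state drives it to zero, which is the usual d-separation behavior at a fork.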
My final sentence was noting the importance of checking the thermodynamic constraints on the processes going on, and the related issue of making time removable from the model. So, I suggested that instead of phrasing questions about “previous/future times”, you should phrase them as being about “when the universe had lower/higher total entropy”. This will have implications for what the sentient program will regard as “its past”.
Furthermore, the entropy calculation is affected by copy (and merge) operations. Copying involves erasing whatever occupied the memory that makes room for the new copy, whereas merging throws away information whenever the copies aren’t identical.
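One way to put a number on that cost is Landauer’s bound; the sketch below assumes the copy and merge operations are implemented irreversibly and run at room temperature (both assumptions mine, purely for illustration):

```python
import math

# Landauer's principle: erasing one bit irreversibly dissipates at least
# k_B * T * ln(2) of heat into the environment. The sizes below (1 MiB
# overwritten, 1024 differing bits merged) are arbitrary illustrative numbers.

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed ambient temperature, K

def min_erasure_heat(bits, temperature=T):
    """Lower bound on heat dissipated when irreversibly erasing `bits` bits."""
    return bits * K_B * temperature * math.log(2)

# Overwriting 1 MiB of memory to make room for a new copy:
bits_overwritten = 8 * 2**20
print(f"Minimum heat to erase 1 MiB at 300 K: "
      f"{min_erasure_heat(bits_overwritten):.3e} J")

# Merging two diverged copies that differ in d bits discards d bits of
# information, so it carries the same per-bit lower bound:
d = 1024
print(f"Minimum heat to merge away {d} differing bits: "
      f"{min_erasure_heat(d):.3e} J")
```

The absolute numbers are tiny, but the point is structural: the entropy ledger picks up a strictly positive term whenever a copy overwrites memory or a merge discards the bits on which the copies disagree.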
Now, does that make it any clearer, or does it just make it look like you overestimated my first post?