Assume you have already been copied and you know you are one of the software versions (some proof of this has been provided). What you don’t know is whether you are in a red ball simulation or a blue ball simulation. You do know that there are a lot of (identical, in the digital sense) red ball simulations and one blue ball simulation. My view on this is that you should presume yourself more likely to be in a red ball simulation.
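The counting argument here can be made explicit. Under a self-sampling assumption (an assumption of this position, not something the thread has established), each running instance is weighted equally, so the credence of being a red ball instance is just the fraction of instances that are red:

```python
def p_red(n_red: int, n_blue: int = 1) -> float:
    """Credence of being a red-ball instance, counting each
    instantiation equally (self-sampling assumption)."""
    return n_red / (n_red + n_blue)

# With 99 red-ball simulations and 1 blue-ball simulation,
# an equal-weighting reasoner assigns 99% to "red".
assert p_red(99) == 0.99
```

The disagreement below is precisely over whether this kind of instance-counting is the right way to assign credence in the first place.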
Ah, this does more precisely address the issue. However, I don’t think it changes my inconclusive response. As my subjective experiences are still identical up until the ball is drawn, I don’t identify exclusively with either substrate and still anticipate a future where “I” experience both possibilities.
As each computer is going to be in a slightly different physical environment, it could be argued that this means that all the programs are different, even if the digital representation put into the box by the humans is the same.
If this is accepted, it seems to rule out the concept of identity altogether, except as excruciatingly defined over specific physical states, with no reliance on a more general principle.
The natural tendency of humans is just to focus on the 1s and 0s—which is just a preferred interpretation.
Maybe sometimes, but not always. The digital interpretation can come into the picture if the mind in question is capable of observing a digital interpretation of its own substrate. This relies on the same sort of assumption as my previous example involving self-observability.
I just think that when we try to go for 50⁄50 (copies don’t count) we can get into a huge mess that a lot of people miss. While I don’t think you agree with me, I think maybe you can see this mess.
I’m not sure if we’re thinking of the same mess. It seems to me the mess arises from the assumptions necessary to invoke probability, but I’m willing to be convinced of the validity of a probabilistic resolution.
If you know you are running on a billion identical machines, and that 90% of them are about to be turned off, then it could become an important issue for you. It would make things very similar to what is regarded as “quantum suicide”.
They do seem similar. The major difference I see is that quantum suicide (or its dust analogue, Paul Durham running a lone copy and then shutting it down) produces near-certainty in the existence of an environment you once inhabited, but no longer do. Shutting down extra copies with identical subjective environments produces no similar outcome. The only difference it makes is that you can find fewer encodings of yourself in your environment.
The visitor scenario seems isomorphic to the red ball scenario. Both outcomes are guaranteed to occur.
I don’t know if I fully understood that—are you suggesting that a reclusive AI or uploaded brain simulation would not exist as a conscious entity?
No, I was pointing out the only example I could synthesize where substrate dependence made sense to me. A reclusive AI or isolated brain simulation by definition doesn’t have access to the environment containing its substrate, so I can’t see what substrate dependence even means for them.
In other words, if we take the view that every abstraction of any object physically exists as a definition of the idea of physical existence, it makes the existence of a physical reality mandatory.
I don’t think I followed this. Doesn’t any definition of the idea of physical existence mandate a physical reality?
I simply take universal realizability at face value. That is my response to this kind of issue. It frees me totally from any concerns about consistency—and the use of measure even makes things statistically predictable.
I still don’t see where you get statistics out of universal realizability. It seems to imply that observers require arbitrary information about a system in order to interpret that system as performing a computation, but if the observers themselves are defined to be computations, the “universality” is at least constrained by the requirement for correlation (information) between the two computations. I admit I find this pretty confusing, I’ll read your article on interpretation.