What reason is there to suspect that a simulated me would have a different/distinguishable experience from real me?
As someone who has written lots of simulations, I can offer a few reasons.
1) The simulation deliberately simplifies or changes some things from reality. At minimum, when “noise” is required, an algorithm is used to generate numbers which have many of the properties of random numbers but
a) are not in fact random, and
b) are usually much more accurately described by a particular mathematical distribution than any measurements of the actual noise in the system would be.
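To make point a) concrete, here is a minimal sketch using Python's standard pseudorandom generator: seed it the same way twice and the "noise" it produces is identical, bit for bit. It is a deterministic algorithm wearing a random-looking mask.

```python
import random

# Two generators seeded identically produce the exact same "noise":
# the sequence is fully determined by the seed, not random at all.
rng_a = random.Random(42)
rng_b = random.Random(42)

seq_a = [rng_a.gauss(0.0, 1.0) for _ in range(5)]
seq_b = [rng_b.gauss(0.0, 1.0) for _ in range(5)]

assert seq_a == seq_b  # identical, every time, on every machine
```

Point b) follows from the same construction: `gauss` is built to match an idealized normal distribution far more cleanly than any finite set of measurements of a physical noise source ever would.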
2) The simulation accidentally simplifies/changes LOTS of things from reality. A brain simulation at the neuron level is likely to simulate observed variations using a noise generator, when those variations actually arise from a) a ream of detailed motions of individual ions and b) quantum interactions. The claim is generally made that one can simulate at a more and more detailed level AND GET TO THE ENDPOINT where the simulation is “perfect.” The getting-to-the-endpoint claim is not only unproven, but highly suspect. At every level of physics we have investigated so far, we have always found a deeper level. Further, the equations of motion at these deepest layers are not known in complete detail. So even if an endpoint exists, we have no reason to believe we have reached it in any given simulation. At some point, we are no longer compute bound, we are knowledge bound.
3) There is a great insight in software that “if it isn’t tested, it’s broken.” How do you even test a supremely deep simulation of yourself? If there are features of yourself you are still learning about, you can’t test for them. Until you comprehensively comprehend yourself, you can never know that a simulation was comprehensively similar.
Even something as simple as a coin toss simulation is likely to be “wrong” in detail. Perhaps you know the coin toss you are actually simulating has .500 or even .500000000 probability of giving heads (where the number of zeros represents the accuracy to which you know it). But what is your confidence that the true expectation is 0.5 with a googolplex zeros following (or 3^^3 zeros, to pretend to fit it in here)? Even 64 zeros would be a bitch to prove. And what are the chances that your simulation gets a “true expectation” of 0.5 with even 64 zeros after it? With the coin toss, the variance might SEEM trivial, but consider the same uncertainty in the human. You need to predict my next post keystroke for keystroke, which necessarily includes a prediction of whether I will eat an egg for breakfast or a bowl of cereal, because the posts I read while eating depend on that. And so on and so on.
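Why would 64 zeros be a bitch to prove? A rough back-of-the-envelope sketch: the standard error of the sample heads-frequency after n flips is about 0.5/sqrt(n), so resolving a bias of size eps takes on the order of 1/eps² flips. (The function name and the constant are mine, for illustration only.)

```python
# Rough sample-size estimate: the standard error of the sample mean
# after n coin flips is ~0.5/sqrt(n), so detecting a bias of size eps
# in the heads-probability requires n on the order of 1/eps**2 flips.
def flips_needed(eps: float) -> float:
    return 1.0 / eps ** 2

# Eight zeros of accuracy already demands ~1e16 flips;
# 64 zeros would take ~1e128 flips, far beyond any physical experiment.
print(f"{flips_needed(1e-8):.3e}")
print(f"{flips_needed(1e-64):.3e}")
```

So even experimentally pinning down the coin you are trying to copy is hopeless past a few digits, let alone making the copy match.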
My claim is that the existence of an endpoint at which the simulation is finally complete is at best vastly beyond our knowledge (and not in a compute-bound way) and at worst simply unknowable, for a ream of good reasons. My estimate of the probability that a simulation will ever be reliably known to be faithful is < 0.01%.
Now we may get to a much easier place: good enough to convince others. That someone can write a simulation of me that cannot be distinguished from me by people who know me is a much lower bar than that the simulation feels the same as me to itself. To convince others, the simulation may not even have to be conscious, for example. But even to clear that lower bar, you are going to have to build your simulation into a fat human body good enough to fool my wife, and give it a variety of nervous and personality disorders that cause it to come up with digs that are deeply disturbing to her.
At some point, the comprehensive difficulty of a problem has to open the question: is it reasonable to sweep this under the rug by appealing to an unknown future of much greater capability than we have now, or is doing that a human bias we may need to avoid?