This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms.
The referenced post by AndrewHickey starts with a ridiculous assumption:
Assume, for a start, that all the information in your brain is necessary to resurrect you, down to the quantum level.
This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can't possibly be true, because the vast majority of that state changes rapidly from quantum moment to quantum moment in a mostly random fashion. There is thus no single quantum state that corresponds uniquely to a mind; rather, there is a vast configuration space.
You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?
There is a single minimal representation of a computer: it reduces exactly down to its circuit diagram and the current values it holds in its memory/storage.
If you don’t buy into the idea that a human mind ultimately reduces down to some functionally equivalent computer program, then of course the entire Simulation Argument won’t follow.
In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space.
Who cares?
There could be infinite detail in the universe—we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle ... and it still wouldn’t matter in the slightest.
You only need as much detail in the simulation as ... you want detail in the simulation.
Some details at certain spatial scales are more important than others based on their leveraged causal effect—such as the bit values in computers, or the synaptic weights in brains.
A simulation at the human-level scale would only need enough detail to simulate conscious humans, which will probably include simulating down to rough approximations to synaptic-net equivalents. I doubt you would even simulate every cell in the body, for example—unless that itself was what you were really interested in.
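To put rough numbers on the savings, here is a back-of-envelope comparison. Every figure below is an order-of-magnitude assumption of mine for illustration, not a value from the discussion above:

```python
# Back-of-envelope: synapse-level vs molecule-level brain simulation cost.
# All figures are rough order-of-magnitude assumptions, not measurements.

SYNAPSES = 1e14            # commonly cited estimate for a human brain
SYNAPSE_RATE_HZ = 1e2      # assume ~100 state updates per second suffice

MOLECULES = 5e25           # ~1.4 kg of brain tissue, mostly water
MOLECULE_RATE_HZ = 1e12    # molecular dynamics needs ~picosecond steps

synaptic_ops_per_sec = SYNAPSES * SYNAPSE_RATE_HZ     # ~1e16 ops/s
molecular_ops_per_sec = MOLECULES * MOLECULE_RATE_HZ  # ~5e37 ops/s

savings = molecular_ops_per_sec / synaptic_ops_per_sec
print(f"synapse-level simulation is ~{savings:.0e}x cheaper")
```

Even if each assumed figure is off by a few orders of magnitude, the gap between the two fidelity levels is so large that the qualitative conclusion survives.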
There is another significant mistake in typical feasibility critiques of simulationism: assuming that your current knowledge of algorithmic simulation is the absolute state of the art from now to eternity, the final word, and that superintelligences won’t improve on it in the slightest.
As a starting example, AndrewHickey and you both appear to be assuming that the simulation must maintain full simulation fidelity across the entire spatio-temporal field. This is a primitive algorithm. A better approach is to adaptively subdivide space-time and simulate at multiple scales at varying fidelity using importance sampling, for example.
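As an illustration of the kind of scheme meant here—a toy sketch with made-up region names and importance weights, not anything from the post—a simulator could spend a fixed per-tick compute budget on regions sampled in proportion to their causal importance, so observer-adjacent detail gets nearly all of the updates while distant background is barely touched:

```python
import random

random.seed(0)

# Made-up importance weights for illustration: how much a region's fine
# detail matters to what the simulated observers can actually notice.
regions = {
    "observer_brains":  1000.0,   # needs synapse-level fidelity
    "nearby_objects":     10.0,   # coarse rigid-body physics is fine
    "distant_galaxies":    0.01,  # a few photons of light will do
}

def run_tick_budget(regions, budget=10_000):
    """Importance-sample which region to refine on each update slot,
    instead of uniform full-fidelity simulation everywhere."""
    names = list(regions)
    weights = list(regions.values())
    counts = dict.fromkeys(names, 0)
    for _ in range(budget):
        counts[random.choices(names, weights=weights)[0]] += 1
    return counts

counts = run_tick_budget(regions)
print(counts)
```

With these weights the observer brains receive roughly 99% of the budget; a real multi-scale simulator would of course also adapt the weights over time as objects move in and out of causal range of the observers.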
This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can't possibly be true, because the vast majority of that state changes rapidly from quantum moment to quantum moment in a mostly random fashion. There is thus no single quantum state that corresponds uniquely to a mind; rather, there is a vast configuration space.
That assumption is not part of my argument. The states of objects outside the people you’re simulating ultimately affect everything else once the changes propagate far enough through the simulation.
You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?
Underestimating the importance of glial cells could get you a pretty bad model of the brain. But my point isn’t simply about the thoughts you’d have to simulate; remove one glial cell from a person’s brain, and the gravitational effects mean that if they throw a superball really hard, after enough bounces it’ll end up somewhere entirely different than it would have (calculating the trajectories of superballs is one of the best ways to appreciate the propagation of small changes.)
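The superball point is easy to demonstrate numerically. Below is a minimal sketch of mine (not from the thread): two trajectories of a ball bouncing around a box containing a circular scatterer, whose convex surface amplifies any difference on each bounce, launched with an angle difference of one part in a billion:

```python
import math

def reflect(vx, vy, nx, ny):
    # Reflect a velocity off a surface with unit normal (nx, ny).
    dot = vx * nx + vy * ny
    return vx - 2 * dot * nx, vy - 2 * dot * ny

def final_position(angle, steps=200_000, dt=0.001):
    """Ball in the unit box with a circular scatterer at the center.
    The convex scatterer makes the dynamics chaotic: tiny launch-angle
    differences are multiplied on every collision with it."""
    x, y = 0.1, 0.1
    vx, vy = math.cos(angle), math.sin(angle)
    cx, cy, r = 0.5, 0.5, 0.2
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if not 0.0 <= x <= 1.0:          # bounce off side walls
            vx = -vx
            x = min(max(x, 0.0), 1.0)
        if not 0.0 <= y <= 1.0:          # bounce off floor/ceiling
            vy = -vy
            y = min(max(y, 0.0), 1.0)
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if d < r:                        # bounce off the scatterer
            nx, ny = dx / d, dy / d
            vx, vy = reflect(vx, vy, nx, ny)
            x, y = cx + nx * r, cy + ny * r   # push back to its surface
    return x, y

x1, y1 = final_position(0.7)
x2, y2 = final_position(0.7 + 1e-9)   # a glial cell's worth of difference
separation = math.hypot(x1 - x2, y1 - y2)
print(f"final separation: {separation:.3f}")
```

The two trajectories end up macroscopically far apart—the initial one-part-in-a-billion difference is amplified by many orders of magnitude, which is exactly the propagation-of-small-changes point being made above.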
Who cares?
There could be infinite detail in the universe—we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle ... and it still wouldn’t matter in the slightest.
You only need as much detail in the simulation as ... you want detail in the simulation.
Why would you want as much detail in the simulation as we observe in our reality?