My thought is that your hypothesis is pretty similar to the Dust Theory.
http://sciencefiction.com/2011/05/23/science-feature-dust-theory/
And Greg Egan’s counter-argument to the Dust Theory is pretty decent:
However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.
I think the same counter-argument applies to your hypothesis.
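Egan’s counting claim can be made concrete with a toy model (my own illustration, not Egan’s): treat “histories” as bit strings and count how many are “orderly” under a crude criterion. Under a naive uniform count, chaotic arrangements vastly outnumber orderly ones:

```python
from itertools import product

def is_orderly(h):
    # Toy criterion (an assumption for illustration): constant or strictly
    # alternating bit strings count as "orderly"; everything else is "chaotic".
    constant = all(b == h[0] for b in h)
    alternating = all(h[i] != h[i + 1] for i in range(len(h) - 1))
    return constant or alternating

n = 16
histories = list(product([0, 1], repeat=n))
orderly = sum(is_orderly(h) for h in histories)
print(orderly, len(histories) - orderly)  # 4 orderly vs. 65532 chaotic
```

The ratio worsens exponentially as n grows, which is the “billions of times more arrangements” intuition.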
A steelmanned version of Egan’s counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan’s original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!
In Tegmark (2008) (see my other comment):
One such issue is the above-mentioned measure problem, which is in essence the problem of how to deal with annoying infinities and predict conditional probabilities for what an observer should perceive given past observations.
[...]
A second testable prediction of the MUH [Mathematical Universe Hypothesis] is that the Level IV multiverse [the multiverse of all mathematical structures] exists, so that out of all universes containing observers like us, we should expect to find ourselves in a rather typical one. Rigorously carrying out this test requires solving the measure problem, i.e., computing conditional probabilities for observable quantities given other observations (such as our existence) and an assumed theory (such as the MUH, or the hypothesis that only some specific mathematical structure like string theory or the Lie superalgebra mb(3|8) [142] exists). Further work on all aspects of the measure problem is urgently needed regardless of whether the MUH is correct, as this is necessary for observationally testing any theory that involves parallel universes at any level, including cosmological inflation and the string theory landscape [67–71]. Although we are still far from understanding selection effects linked to the requirements for life, we can start testing multiverse predictions by assessing how typical our universe is as regards dark matter, dark energy and neutrinos, because these substances affect only better understood processes like galaxy formation. Early such tests have suggested (albeit using questionable assumptions) that the observed abundance of these three substances is indeed rather typical of what you might measure from a random stable solar system in a multiverse where these abundances vary from universe to universe [42, 134–139].
Tegmark makes a few remarks on using algorithmic complexity as the measure:
It is unclear whether some sort of measure over the Level IV multiverse is required to fully resolve the measure problem, but if this is the case and the CUH [Computable Universe Hypothesis] is correct, then the measure could depend on the algorithmic complexity of the mathematical structures, which would be finite. Labeling them all by finite bit strings s interpreted as real numbers on the unit interval [0, 1) (with the bits giving the binary decimals), the most obvious measure for a given structure S would be the fraction of the unit interval covered by real numbers whose bit strings begin with strings s defining S. A string of length n bits thus gets weight 2^(−n), which means that the measure rewards simpler structures. The analogous measure for computer programs is advocated in [16]. A major concern about such measures is of course that they depend on the choice of representation of structures or computations as bit strings, and no obvious candidate currently exists for which representation to use.
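Tegmark’s 2^(−n) weighting is easy to sketch (assuming, as he notes one must, some fixed encoding of structures as bit strings): a structure described by an n-bit string gets the measure of the subinterval of [0, 1) whose binary expansions begin with that string.

```python
from fractions import Fraction

def weight(bitstring: str) -> Fraction:
    """Fraction of [0, 1) covered by reals whose binary expansion starts
    with `bitstring`, i.e. 2**(-len(bitstring))."""
    assert set(bitstring) <= {"0", "1"}
    return Fraction(1, 2 ** len(bitstring))

print(weight("101"))       # 1/8 -- shorter descriptions get more weight
print(weight("10110011"))  # 1/256

# For any prefix-free set of descriptions, the weights sum to at most 1
# (Kraft's inequality), so this yields a genuine (sub)probability measure:
codes = ["00", "01", "10", "110", "111"]
print(sum(weight(c) for c in codes))  # 1
```

The unresolved part, as Tegmark says, is that the weights depend entirely on the choice of encoding.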
Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).
In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.
I don’t understand this at all. Can you give an example of such an appropriate measure?
An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up).
Whether this is an appropriate measure (i.e., whether it correctly “predicts conditional probabilities for what an observer should perceive given past observations”) is unknown; if we knew how to find out, then we could directly resolve the measure problem!
An example of a context where we can give an explicit measure is the many-worlds interpretation, where, as I mentioned, the Born probabilities resolve the analogous measure problem.
So you are saying that the “Born probabilities” are an example of an “appropriate measure” which, if “postulated,” rebuts Egan’s argument?
Is that correct?
The Born probabilities apply to a different context—the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities—a measure—for dust theory, then we’d be able to resolve Egan’s argument one way or another (depending on which way the numbers came out under this measure).
Since we don’t yet know what the measure is, it’s not clear whether Egan’s argument holds—under the “Tegmark algorithmic complexity measure” Egan would be wrong; under the “naive measure” Egan is right. But we need some external evidence to know which measure to use. (By contrast, in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results, and also because, e.g., they’re preserved under a QM system’s unitary evolution.)
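The measure-dependence is easy to exhibit in a toy model (purely illustrative; the “description length” below is a crude stand-in for algorithmic complexity): under a uniform (“naive”) count, orderly worlds are rare, while under a 2^(−description length) weighting they dominate.

```python
from itertools import product

def description_length(w: str) -> int:
    # Crude complexity proxy (an assumption for illustration): 1 bit for the
    # starting symbol plus ~4 bits per run in a run-length encoding.
    runs = 1 + sum(w[i] != w[i + 1] for i in range(len(w) - 1))
    return 1 + 4 * runs

n = 12
worlds = ["".join(map(str, bits)) for bits in product([0, 1], repeat=n)]
orderly = [w for w in worlds if description_length(w) <= 9]  # at most 2 runs

naive = len(orderly) / len(worlds)
total = sum(2.0 ** -description_length(w) for w in worlds)
weighted = sum(2.0 ** -description_length(w) for w in orderly) / total
print(f"naive: {naive:.4f}, complexity-weighted: {weighted:.4f}")
```

Under the naive count orderly worlds get well under 1% of the measure; under the complexity weighting they get the large majority. Which weighting is the right one is exactly the external-evidence question.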
I would guess you are probably correct that Egan’s argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one.
Boiled down to its essentials, VincentYu’s argument seems to be that if Egan’s informal claim is incorrect, then Egan’s argument fails. Well duh.
Here’s a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/
I’m not sure I agree with that argument. The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I’d expect from a universe driven by a computer simulation. Discrete values are much easier than continuous sets.
On the other hand, superposition and entanglement seem suboptimal.
The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I’d expect from a universe driven by a computer simulation.
I’m not sure I understand your point. Are you saying that a simulation which is just a mathematical construct would probably not result in a quantized universe?
I was intending to say the opposite: a quantized world seems like it would take less computational power than a continuous one, so the fact that we live in a quantized world is evidence that we are in a simulation.
That’s not an unreasonable point, but I think it goes more to the issue of simulation versus non-simulation than the issue of computer-based simulation versus mathematical construct simulation.
Well, I suppose we could postulate something like a continuous version of quantum mechanics for a host universe if we’d like.
Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I’m glad Egan’s at least looked at it a bit).
This gets us a contradiction, not a refutation, and one man’s modus ponens is another man’s modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?
Can we use this to argue for a flaw in the original simulation argument?
I don’t think so, since it’s reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side as opposed to being full of random nonsense.
But it’s still an interesting question. One can imagine a room with two large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s.
The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.
I’d like to see that argument. If you can find a mapping that doesn’t end up encoding the simulation in the mapping, I’d be surprised.
Well why should it matter if the simulation is encoded in the mapping?
If it is, that screens off any features of what it’s mapping; you can no longer be surprised that ‘random noise’ produces such output.
Again, so what?
Let me adjust the original thought experiment:
The operation of the first computer is encrypted using a very large one-time pad.
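A minimal sketch of the one-time-pad version (illustrative; XOR with a uniformly random pad is a textbook one-time pad): the tape the computer emits is uniformly random on its own, and the entire content of the “simulation” resides in the pad/mapping—which is the earlier objection that the mapping ends up encoding the simulation.

```python
import secrets

frame = b"Simulated 1950s Los Angeles, frame 0001"
pad = secrets.token_bytes(len(frame))             # the (toy-sized) one-time pad
tape = bytes(m ^ p for m, p in zip(frame, pad))   # what an observer of the computer sees

# The tape alone is uniform random noise, yet XOR with the right pad recovers the frame:
decoded = bytes(t ^ p for t, p in zip(tape, pad))
print(decoded)  # b'Simulated 1950s Los Angeles, frame 0001'

# Worse: for ANY other message of the same length, some "pad" decodes the very
# same tape to it, so the tape carries no information about which world it "runs".
other = b"Arbitrary different content of same len"
other_pad = bytes(t ^ m for t, m in zip(tape, other))
assert bytes(t ^ p for t, p in zip(tape, other_pad)) == other
```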