First, remember that Kolmogorov complexity is only well-defined up to a constant, which is determined by your model of computation. [...]
Right. If you take away, say, the addition instruction, then the shortest program might get longer because it has to emulate addition using subtraction and negation (or use a totally different approach, or include a translation pass that expands each addition into negate-and-subtract, or include a universal Turing machine that runs the original machine, and so on).
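To make the "constant overhead" concrete (a toy sketch, not any real instruction set): addition can be recovered from negation and subtraction via the identity x + y = -((-x) - y), so a machine stripped of addition pays only a constant number of extra instructions per add.

```python
# Toy illustration: emulating addition on a machine that only has
# negation and subtraction, using the identity x + y = -((-x) - y).

def neg(x):
    return -x

def sub(x, y):
    return x - y

def add(x, y):
    # addition expressed purely in terms of neg and sub
    return neg(sub(neg(x), y))

assert add(3, 4) == 7
assert add(-5, 2) == -3
```

This is the sense in which Kolmogorov complexity only shifts by a constant between models of computation: the emulation itself has a fixed description length, independent of the program being run.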
Second, the article addresses why you can have the paradoxical situation of the Universe being low-complexity while specific things in the Universe are of high complexity.
In this case I care about the complexity of the laws that govern the change in state (positions of atoms or values of the wave functions) with respect to time. I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms. That would make the complexity of the laws laughably negligible by comparison. I realize that this is, in a sense, an arbitrary distinction but… that’s the question I want to know the answer to.
According to string theory (which is a universal theory in the sense that it is Turing-complete), the landscape of possible Universes is 2^500 or so, which works out to 500 bits of information. Perhaps this is where Eliezer got the figure from (though I admit that I don’t exactly know where he got it from either).
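The bit count here is just a trivial calculation (assuming, for the sake of the sketch, that the vacua are a priori equiprobable): singling out one universe from a landscape of N possibilities takes log2(N) bits.

```python
import math

# Bits needed to single out one universe from a landscape of N
# equally likely possibilities: log2(N).
landscape_size = 2 ** 500
bits = math.log2(landscape_size)
assert bits == 500.0
```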
Interesting guess at where the number could have come from. I just assumed he tallied up various aspects of the standard model somehow and got an answer between 450 and 510.
I will not make you pay for the presumably absurd amount of data required for the initial state of >10^(82) atoms.
You don’t have any atoms in the initial state—nor anything that can reasonably be called matter. My (very ignorant) guess is that we won’t even know what it takes to specify the initial state before we have a unified GR+QFT theory.
According to string theory (which is a Universal theory in the sense that it is Turing-complete) the landscape of possible Universes is 2^500 or so, which leads to 500 bits of information.
If one wishes to describe oneself in a particular universe, then, assuming MWI, fixing the universe is peanuts, complexity-wise, compared to fixing the appropriate Everett branch. The number of bits there is just astounding, it would seem to me.
The standard model is over here, see page 36: http://arxiv.org/pdf/hep-th/0610241v1.pdf
Uh, wow, that’s a somewhat large equation. It has like 500 terms. Seems… inconsistent with physicists seeing beauty in physics.
You can see it in just five lines here, page 1. And an even more compact formulation would just list the symmetry groups, the various fields and how they transform under each group, and would then stipulate that the Lagrangian contains every renormalizable term allowed by those symmetries (which is a principle in the construction of such theories, since renormalizable terms that aren’t included get generated by quantum corrections anyway).