But ordering over the complexity of your brain, rather than the universe, is already postulating that a lawful universe isn’t the best explanation. You can’t have your cake and eat it too.
A lawful universe is the best explanation for my experiences. My experience is embodied in a particular cognitive process. To describe this process I say:
“Consider the system satisfying the law L. To find Paul within that system, look over here.”
In order to describe the version of me that sees 10 heads in a row, I instead have to say:
“Consider the system satisfying the law L, in which these 10 coins came up heads. To find Paul within that universe, look over here.”
The probability of seeing 10 heads in a row may be slightly higher: each additional explanation adds to the probability of an experience, and describing an “arbitrary change” is easier when the change is “make all 10 outcomes H” than when it sets the outcomes in some more complicated pattern. However, the same effect is present in Solomonoff induction.
There are many more subtleties here, and there are universes which involve randomness in a way where I would predict that HHHHHHHHHH is the most likely result from looking at 10 coin flips in a row. But the same things happen with Solomonoff induction, so they don’t seem worth talking about here.
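A toy numeric sketch of the effect being discussed (not part of the original exchange; run-length encoding is used as a crude, hypothetical stand-in for description length): under a simplicity prior over 10-flip sequences, the all-heads sequence gets noticeably more weight than uniform, while a typical mixed sequence gets less.

```python
from itertools import groupby, product

def rle_len(seq):
    # Crude proxy for description length: size of a run-length encoding,
    # e.g. "HHHHHHHHHH" -> "H10" (3 chars), "HTHT..." -> "H1T1..." (20 chars).
    return sum(1 + len(str(len(list(g)))) for _, g in groupby(seq))

# All 2^10 sequences of 10 coin flips, weighted by 2^(-description length),
# then normalized into a probability distribution.
seqs = ["".join(p) for p in product("HT", repeat=10)]
weights = {s: 2.0 ** -rle_len(s) for s in seqs}
total = sum(weights.values())
prior = {s: w / total for s, w in weights.items()}

print(prior["HHHHHHHHHH"])  # well above the uniform 2**-10
print(prior["HTTHHTHTTH"])  # below the uniform 2**-10
print(2 ** -10)             # maximum-entropy prior: every sequence equal
```

Any real description-length measure would differ in the details, but the qualitative point is the same: a compressible sequence like all-heads is cheaper to specify, so a simplicity prior inflates its probability relative to the maximum-entropy answer.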
Best explanation by what standard? By the standard where you rank universes from least complex to most complex! You cannot do two different rankings simultaneously.
So then, are you saying that you do not think that a simplicity prior on your brain is a good idea?
The shortest explanation for my thoughts: that is precisely a simplicity prior on my brain. It says nothing about the complexity of the universe.
I believe that the shortest explanation for my thoughts is the one that says “Here is the universe. Within the universe, here is this dude.” This is a valid explanation for my brain, and it gets longer if I have to modify it to make my brain “simpler” in the sense you are using, not shorter.
No, it doesn’t. Picking between microstates isn’t a “modification” of the universe, it’s simply talking about the observed probability of something that already happens all the time.
Although now that I think about it, this argument should apply to more traditional anthropics as well, if a simplicity prior is used. And since I’ve done this experiment a few times now, I can say with high confidence that a strong simplicity prior is incorrect when flipping coins (especially when anthropically flipping coins [which means I did it myself]), and a maximum entropy prior is very close to correct.
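The empirical claim at the end can be sketched numerically (a toy simulation, not the experiment described above): among many blocks of 10 fair flips, the frequency of all-heads blocks matches the maximum-entropy prediction of 2^-10, not the inflated weight a strong simplicity prior would assign to the all-heads outcome.

```python
import random

random.seed(0)
N = 200_000  # number of 10-flip blocks to simulate

# Count blocks in which all 10 fair flips come up heads.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(N)
)
freq = all_heads / N

print(freq)      # empirical frequency of 10 heads in a row
print(2 ** -10)  # maximum-entropy prediction, ~0.000977
```

The empirical frequency lands within sampling error of 2^-10, which is the sense in which a maximum-entropy prior is “very close to correct” for coin flips.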