> But, just as HALT-predicting programs are more complex than immortalist programs, other RADICAL-TRANSFORMATION-OF-EXPERIENCE-predicting programs are too. For every program in AIXI’s ensemble that’s a reductionist, there will be simpler agents that mimic the reductionist’s retrodictions and then make non-naturalistic predictions.
So this seems to be the root of the problem. Contrary to what you argued in the previous post, my intuition is that the programs that make non-naturalistic predictions are not shorter. Generic non-naturalistic programs get ruled out in the course of learning how the world works; programs that make non-naturalistic predictions specifically about what AIXI(tl) will experience after smashing itself have to treat the chunk of the Universe carrying out the computation as special, and that is exactly what makes them less simple than programs that do not single out that chunk.
As you can see, my intuition is quite at odds with the intuition inspired by noticing that programs with a HALT instruction are always longer than programs that just chop off said HALT instruction.
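The length intuition at stake here can be made concrete with a toy sketch. Under a Solomonoff-style prior, a program of length L receives weight 2^(−L), so a hypothesis that must spend extra description bits singling out "the chunk of the universe running AIXI(tl)" and attaching a special post-destruction prediction is penalized exponentially in those extra bits. The program "texts" below are purely illustrative stand-ins (hypothetical names, character count standing in for Kolmogorov complexity), not real AIXI hypotheses:

```python
# Toy illustration: 2^(-length) prior weights for a naturalistic hypothesis
# versus one that adds a special clause about the chunk running AIXI(tl).
# Program strings and names are illustrative placeholders only.

naturalistic = "run_physics(universe)"
non_naturalistic = (
    "run_physics(universe); "
    "if destroyed(chunk_running_AIXItl): predict(special_experience)"
)

def prior_weight(program: str) -> float:
    """Solomonoff-style prior, with string length standing in for complexity."""
    return 2.0 ** (-len(program))

extra_bits = len(non_naturalistic) - len(naturalistic)
penalty = prior_weight(naturalistic) / prior_weight(non_naturalistic)
print(f"extra description length: {extra_bits} symbols")
print(f"prior-weight penalty factor: 2**{extra_bits}")
```

On this picture the dispute is over which hypothesis actually needs the extra clause: whether the special-casing bits belong to the non-naturalistic program (as argued here) or to the naturalistic one (as the HALT-instruction intuition suggests).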
> programs that make non-naturalistic predictions specifically about what AIXI(tl) will experience after smashing itself have to treat the chunk of the Universe carrying out the computation as special,
Well, any program AIXI gives weight to must regard that chunk of the universe as special. After all, it is that chunk that correlates with AIXI’s inputs and actions; indeed, the only reason this universe is considered as a hypothesis at all is that that chunk exhibits those correlations.
The kind of “special” you’re talking about is learnable (and in accord with naturalistic predictions); the kind of “special” I’m talking about is false (cf. the standard arguments against dualism).