I think you are conflating two different problems:
1. How to learn by reinforcement in an unknown non-ergodic environment (e.g. one where it is possible to drop an anvil on your head).
2. How to make decisions that take into account future reward, in a non-ergodic environment, where actions may modify the agent.
The first problem is well known in the reinforcement learning community, and in fact it is also mentioned in the first AIXI papers, but there it is sidestepped with an ergodicity assumption rather than addressed. I don’t think there can be a fully general solution to this problem: you need some environment-specific prior or supervision.
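To make the non-ergodicity point concrete, here is a toy sketch (entirely hypothetical setup, not from any of the papers discussed): an agent on a small chain of states where stepping off the left end is an absorbing "anvil" state. With any amount of random exploration, long runs almost surely hit the fatal state, so the usual ergodicity-based argument that every state is revisited infinitely often no longer applies.

```python
import random

def run_episode(epsilon, steps=10_000, seed=0):
    """Agent on states {0, 1, 2}; stepping left of state 0 is fatal and absorbing."""
    rng = random.Random(seed)
    state = 1
    for _ in range(steps):
        if rng.random() < epsilon:
            move = rng.choice([-1, +1])  # exploration may step toward the anvil
        else:
            move = +1                    # exploiting a known-safe action (clipped at 2)
        state = min(state + move, 2)
        if state < 0:
            return False                 # absorbed: no further learning is possible
    return True

# Without exploration the agent is safe forever; with exploration,
# some fraction of long runs end in the unrecoverable state.
safe = run_episode(0.0, seed=0)
deaths = sum(not run_episode(0.1, seed=s) for s in range(100))
```

The point of the sketch is that no amount of further experience corrects a wrong step here, which is exactly why an environment-specific prior (or a supervisor who knows where the anvils are) seems necessary.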
The second problem doesn’t seem as hard as the first one. AIXI, of course, can’t model self-modification, since it is incomputable and can only deal with computable environments, but computable variants of AIXI (Schmidhuber’s Gödel machine, perhaps?) can easily represent themselves as part of the environment.