math of AIXI assumes the environment is separably divisible—no matter what you lose, you get a chance to win it back later.
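One rough way to cash that property out (my own gloss, not Hutter's exact condition): writing $V^\pi_\mu(h)$ for the expected future reward of policy $\pi$ in environment $\mu$ after history $h$,

$$\sup_\pi V^\pi_\mu(h) = \sup_\pi V^\pi_\mu(h') \quad \text{for all histories } h, h',$$

i.e. the best attainable future value never depends on what has already happened, so no loss is permanent.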
There’s nothing preventing you from running AIXItl in an environment that doesn’t have this property. You lose the optimality results, but if you gave it a careful early training period and let it learn physics before giving it full manipulators and access to its own physical instantiation, it might not kill itself.
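As a sketch of what that staged deployment could look like (all names here are hypothetical, not from any existing AIXI implementation):

    # Hypothetical sketch of a staged deployment: the dangerous actuators are
    # withheld until the agent has had a training period to learn physics.
    class StagedEnvironment:
        def __init__(self, env, safe_actions, full_actions, training_steps):
            self.env = env                    # the underlying physics environment
            self.safe_actions = safe_actions  # e.g. sensors and low-power motors
            self.full_actions = full_actions  # e.g. manipulators, self-access
            self.training_steps = training_steps
            self.t = 0

        def legal_actions(self):
            # Expose the full action set only after the training period.
            if self.t < self.training_steps:
                return self.safe_actions
            return self.full_actions

        def step(self, action):
            assert action in self.legal_actions()
            self.t += 1
            return self.env.step(action)      # (observation, reward)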
You could also build a sense of self into its priors, stating that certain parts of the physical world must be preserved, or else all rewards from that point on will be zero.
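A minimal sketch of that idea, with the constraint written as a reward filter rather than literally baked into the prior, and with self_intact as an assumed predicate over observations (none of this comes from an actual AIXI codebase):

    # Hypothetical sketch: once the protected hardware is observed destroyed,
    # every subsequent reward is clamped to zero, so hypotheses in which the
    # agent wrecks itself predict no further reward at all.
    class SelfPreservingRewardFilter:
        def __init__(self, self_intact):
            self.self_intact = self_intact  # predicate over observations
            self.destroyed = False

        def __call__(self, observation, raw_reward):
            if not self.self_intact(observation):
                self.destroyed = True       # latch: the loss is permanent
            return 0.0 if self.destroyed else raw_reward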