To what extent would a proof about AIXI’s behavior be normative advice?
Though AIXI itself is not computable, we can prove some properties of the agent. Unfortunately, there are fairly few such results, because of the “bad universal priors” barrier discovered by Jan Leike. In the sequential case we only know a few things, e.g. that AIXI will not indefinitely keep trying an action that yields minimal reward, though we can say more when the horizon is 1 (which, in a sense, reduces to the predictive case). And there are many interesting results about the behavior of Solomonoff induction (roughly speaking, the predictive part of AIXI).
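To see why Solomonoff induction can be viewed as the predictive part of AIXI, it helps to have Hutter's expectimax definition in front of us. This is only a sketch of the standard formulation: m is the horizon, U a universal monotone Turing machine, and ℓ(q) the length of an environment program q.

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big[\, r_k + \cdots + r_m \,\big]
\sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
$$

The innermost sum over programs is the Solomonoff-style universal prior over observation-reward histories; when the horizon collapses to a single step, the future maximizations drop out and the agent is essentially doing Solomonoff prediction of the next reward.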
For the sake of argument, though, suppose we could prove some (more?) interesting statements about AIXI's strategy; certainly this is possible for us computable beings. But would we want to take those statements as advice, or are we too ignorant to benefit from cargo-culting an inscrutable demigod like AIXI?