That depends on what statements are “expressible” by the AI. If it can use quantification (“this boolean formula is true for some/all inputs”), computing the prior becomes NP-hard (coNP-hard in the universally quantified case). An even trickier case: imagine you program the AI with ZFC and ask it for its prior on the continuum hypothesis. On the other hand, you may use clever programming to avoid evaluating prior values that are difficult or impossible to evaluate, as Monte Carlo AIXI does.
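To make the sampling idea concrete, here is a minimal sketch (not MC-AIXI itself, and `formula` is a hypothetical example): exactly evaluating the fraction of satisfying inputs of an n-variable formula costs 2^n evaluations, but a Monte Carlo estimate from random samples converges to the same value at a cost independent of n.

```python
import random

def formula(x):
    # Hypothetical 3-variable boolean formula: (x0 or x1) and (not x2)
    return (x[0] or x[1]) and not x[2]

def exact_fraction(f, n):
    # Exhaustive enumeration of all 2^n assignments: exponential in n.
    total = 0
    for i in range(2 ** n):
        bits = [(i >> k) & 1 for k in range(n)]
        total += bool(f(bits))
    return total / 2 ** n

def monte_carlo_fraction(f, n, samples=10_000, seed=0):
    # Estimate the same fraction from uniform random assignments;
    # cost depends on the sample count, not on 2^n.
    rng = random.Random(seed)
    hits = sum(bool(f([rng.randrange(2) for _ in range(n)]))
               for _ in range(samples))
    return hits / samples

print(exact_fraction(formula, 3))        # 0.375 (3 of 8 assignments satisfy it)
print(monte_carlo_fraction(formula, 3))  # close to 0.375
```

The same trade-off is why sampling-based approximations remain usable when exact evaluation of a prior is intractable or undecidable: you only ever evaluate the formula (or program) on concrete inputs, never quantify over all of them.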
Far out.