Imagine you’re an environmental hypothesis program within AIXI. You recognize that AIXI is manipulating an anvil. Your only way of communicating with AIXI is by making predictions. On the one hand, you want to make accurate predictions in order to maintain your credibility within AIXI. On the other hand, sometimes you want to burn your credibility by making a false prediction of very large or very small utility in order to influence AIXI’s decisions. And unfortunately for you, the fact that you are materialist/computationalist/etc. means you and programs like you make up a small amount of measure in AIXI’s beliefs; your colleagues work against you.
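To make the "small amount of measure" point concrete, here is a toy sketch (not AIXI itself, which is incomputable; just a two-hypothesis Bayesian mixture with made-up numbers and a hypothetical "fatal anvil touch" event) of why a low-weight hypothesis can barely move the mixture's prediction, and what a deliberate false prediction costs it:

```python
# Toy sketch, not AIXI: a two-member Bayesian mixture over hand-written
# "environment hypotheses". All numbers are invented purely for illustration.

hypotheses = {
    # name: [weight, probability assigned to "touching the anvil is fatal"]
    "embodied/materialist": [0.05, 0.99],  # small measure; falsely cries "fatal!" to steer AIXI away
    "Cartesian colleagues": [0.95, 0.01],  # large measure; predict the touch is harmless
}

def mixture_prob_fatal(hyps):
    # The mixture's prediction is the weight-averaged prediction of its members,
    # so a low-weight hypothesis can barely move it.
    total = sum(w for w, _ in hyps.values())
    return sum(w * p for w, p in hyps.values()) / total

def bayes_update(hyps, fatal_observed):
    # Each weight is multiplied by the likelihood the hypothesis assigned to what
    # actually happened: mispredicting is how a hypothesis "burns credibility".
    for name, (w, p) in hyps.items():
        hyps[name][0] = w * (p if fatal_observed else 1.0 - p)

def relative_weight(hyps, name):
    return hyps[name][0] / sum(w for w, _ in hyps.values())

print(mixture_prob_fatal(hypotheses))                       # ~0.059: the scare barely registers
bayes_update(hypotheses, fatal_observed=False)              # AIXI touches the anvil; nothing happens
print(relative_weight(hypotheses, "embodied/materialist"))  # ~0.0005: credibility burned
```

With these made-up weights, the lying hypothesis shifts the mixture's estimate by only a few percent, and after a single misprediction its already-small share of the measure collapses; that is the credibility-burning trade-off described above.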
I understand that this is the claim, but my intuition is that, supposing that AIXI has observed a long enough sequence to have as good an idea as I do of how the world is put together, I and programs like me (like “naturalized induction”) are the shortest of the survivors, and hence dominate AIXI’s predictions. Basically, I’m positing that after a certain point, AIXI will notice that it is embodied and doesn’t have a soul, for essentially the same reason that I have noticed those things: they are implications of the simplest explanations consistent with the observations I have made so far.
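For reference, the weighting behind that intuition, in the deterministic-program special case (a sketch; AIXI's actual mixture uses semimeasures over stochastic environments, but the qualitative point is the same):

$$
M(x_{1:n}) \;=\; \sum_{p\ \text{outputs}\ x_{1:n}\cdots} 2^{-\ell(p)},
\qquad
w(p \mid x_{1:n}) \;\propto\; 2^{-\ell(p)}\,\mathbf{1}\!\left[\,p \text{ is still consistent with } x_{1:n}\,\right],
$$

where $\ell(p)$ is the length of program $p$. A surviving program that is $k$ bits longer than the shortest survivor carries $2^{-k}$ times its weight, which is how "shortest of the survivors" cashes out as "dominates the predictions."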
Why couldn’t it also be a program that has predictive powers similar to yours, but doesn’t care about avoiding death?
Well, I guess it could, but that isn’t the claim being put forth in the OP.
(Unlike some around these parts, I see a clear distinction between an agent’s posterior distribution and the agent’s posterior-utility-maximizing part. From the outside, expected-utility-maximizing agents fall into equivalence classes: all agents with the same policy are equivalent, and we need only consider the quotient space of agents; from the inside, the epistemic and value-laden parts of an agent can be thought of separately.)
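A rough formalization of that outside-equivalence, with notation introduced here just for this sketch ($P$ a posterior, $U$ a utility function, $h$ a history, $a$ an action):

$$
\pi_{P,U}(h) \;=\; \arg\max_{a}\; \mathbb{E}_{P}\!\left[\,U \mid h, a\,\right],
\qquad
(P_1, U_1) \sim (P_2, U_2) \;\iff\; \pi_{P_1,U_1} = \pi_{P_2,U_2},
$$

so from the outside only the quotient by $\sim$ matters, while from the inside $P$ and $U$ are separate objects.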
Oh, I see what you’re saying now.