If you are doing perfect Bayesian updates and nothing else, that is an endorsed update.
Note that in a Bayesian framework, you can assign 50% probability to some statement and be confident that you will still assign 50% probability next year, if you don’t expect any new evidence. You can also assign 50% probability with the expectation that it will turn into 0% or 100% in five minutes. You can also expect your probabilities to change conditional on future survival: if you are alive next year, you will have updated towards the tumour being non-malignant. But in the toy model where you never die and always do perfect Bayesian updates, like AIXI, the expected future probability is equal to the current probability. (The distribution over future probabilities can be anything with this expectation.)
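A minimal sketch of that property (conservation of expected evidence), with made-up likelihoods of my own rather than anything from a real example:

```python
# For a perfect Bayesian who never dies, the expected posterior equals the
# prior, whatever the evidence's likelihoods happen to be.

prior = 0.5                # P(X) now
p_e_given_x = 0.9          # P(E | X)      -- illustrative numbers only
p_e_given_not_x = 0.2      # P(E | not X)

# Predictive probability of seeing the evidence E.
p_e = prior * p_e_given_x + (1 - prior) * p_e_given_not_x

# Bayes' rule for each possible outcome.
posterior_if_e = prior * p_e_given_x / p_e
posterior_if_not_e = prior * (1 - p_e_given_x) / (1 - p_e)

# Average the future belief over what you expect to observe.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e

print(f"posterior if E:     {posterior_if_e:.3f}")      # 0.818
print(f"posterior if not E: {posterior_if_not_e:.3f}")  # 0.111
print(f"expected posterior: {expected_posterior:.3f}")  # 0.500 == prior
```

The individual posteriors can land anywhere; only their predictive-weighted average is pinned to the prior.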
Actually, there is also the possibility of strange loops if you try to use this rule to determine your current belief. You correctly predict that this whole thing happens: you predict you will update your belief in X to 93%, so you update your belief in X to 93%, as predicted. You end up believing any nonsense today just because you predict that you will believe it tomorrow.
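In symbols (notation mine): using the rule to pin down your current belief gives the fixed-point equation

$$p_{\text{now}} = \mathbb{E}[\,p_{\text{tomorrow}}\,], \qquad p_{\text{tomorrow}} = p_{\text{now}},$$

which reduces to $p_{\text{now}} = p_{\text{now}}$ and is satisfied by every value in $[0,1]$. The 93% above is no more justified than any other self-fulfilling fixed point.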
In everyday rationality, the noise is usually fairly small compared to the updates, so this rule can be used fairly well in the way Eliezer is using it. The strange loops aren’t something a human would usually fall into (as opposed to an AI). If for some reason your expected future beliefs are easier to calculate than your current beliefs, use those. If you have a box, and you know that for every content you can imagine seeing when you open it you would end up believing X, then you should go ahead and believe X now (if you have a good imagination).
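A minimal sketch of the box argument, with made-up contents and likelihoods (the names and numbers are my own illustration): since your current belief is the predictive-weighted average of your imagined future beliefs, it must lie between the lowest and highest posterior you can foresee.

```python
# Imagine every way the box could turn out, compute the posterior on X for
# each, and average over your predictive distribution for the contents.

contents = ["coin", "note", "empty"]
p_contents_given_x =     {"coin": 0.6, "note": 0.3, "empty": 0.1}  # P(c | X)
p_contents_given_not_x = {"coin": 0.5, "note": 0.4, "empty": 0.1}  # P(c | not X)

prior_x = 0.5

def predictive(c):
    """P(seeing contents c), before opening the box."""
    return prior_x * p_contents_given_x[c] + (1 - prior_x) * p_contents_given_not_x[c]

def posterior_x(c):
    """Belief in X after opening the box and seeing c (Bayes' rule)."""
    return prior_x * p_contents_given_x[c] / predictive(c)

expected_belief = sum(predictive(c) * posterior_x(c) for c in contents)

print(f"expected belief after opening: {expected_belief:.3f}")        # == prior_x
print(f"lowest imaginable posterior:   {min(posterior_x(c) for c in contents):.3f}")
# If even the lowest imaginable posterior is high, your current belief
# must already be at least that high -- so believe it now.
```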