I see the key flaw in the idea that the more exceptional the promise is, the lower the probability you must assign to it.
According to common LessWrong ideas, lowering the probability based on the exceptionality of the promise would mean lowering it based on the Kolmogorov complexity of the promise.
If you do that, you won’t lower the probability enough to defeat the mugging.
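To see why the complexity penalty is too weak: a short description can name a number vastly larger than 2 raised to the description's length in bits, so the payoff outruns the penalty. A minimal Python sketch, using description length as a stand-in for the uncomputable Kolmogorov complexity (the 8-bits-per-character proxy and the example promise are illustrative assumptions, not the actual mugging):

```python
import math

# Crude proxy: Kolmogorov complexity is uncomputable, but it is upper-bounded
# by the description length in bits (plus a constant we ignore here).
promise = "10**(10**100)"            # a short string naming a vast utility
k_bits = len(promise) * 8            # K(promise) <= 8 * len(promise) + O(1)

log10_penalty = -k_bits * math.log10(2)   # log10 of the 2^-k_bits penalty
log10_payoff = 10 ** 100                  # log10 of the promised utilons
log10_expected = log10_payoff + log10_penalty

# The penalty removes only ~31 orders of magnitude; the payoff has 10^100
# of them, so the expected value remains astronomically positive.
```

The complexity-based discount shrinks only exponentially in the description length, while the describable payoff grows far faster than exponentially, so the expectation stays enormous.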
If you can lower the probability more than that, of course you can defeat the mugging.
And one of the key problems with lowering it more is that it becomes really really hard to update when you get evidence that the mugging is real.
If you do that, your decision system just breaks down, since the expectation over arbitrary integers with probabilities computed by Solomonoff induction is undefined. That’s the reason why AIXI uses bounded rewards.
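The divergence can be made concrete: for rewards of the form 2^m, the complexity of the reward is at most roughly the complexity of its exponent, so individual terms of the expectation already grow without bound. A rough sketch, where the bound K(2^m) ≤ log2(m) + C and the constant C are illustrative assumptions:

```python
import math

C = 10  # hypothetical additive constant from the choice of universal machine

def k_bound_power_of_two(m: int) -> float:
    # K(2^m) <= K(m) + O(1) <= log2(m) + C: describing 2^m costs only
    # about as many bits as describing the exponent m (crude proxy bound).
    return math.log2(m) + C

def term(m: int) -> float:
    # Contribution of the single outcome "reward = 2^m" to the expectation
    # sum_n n * 2^-K(n), estimated via the proxy bound above.
    return 2 ** m * 2 ** -k_bound_power_of_two(m)

# term(m) = 2^m / (m * 2^C) grows without bound as m increases, so the
# expectation over arbitrary integer rewards diverges -- it is undefined.
```

Since even single terms of the sum blow up, no rearrangement rescues the expectation, which is exactly why AIXI restricts itself to bounded rewards.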