I believe the discussion of UDT is spot on, and a very good summary placing various thought experiments in its context (though reframing Smoking Lesion to get the correct answer seems like cheating).
I have trouble understanding your second point about Sleeping Beauty (and DT-independent probabilities).
Thanks very much! I’m especially pleased that you thought it was accurate.
As for the second point—yeah it seems everyone wants to disagree with me on that :-/
What I describe (perhaps unclearly) is a ‘standard recipe’ for attaching meaning to statements about indexical probabilities (like “I am at the second intersection” in the absent-minded driver problem) which doesn’t depend on decision theory (except in the way I noted as the ‘caveat’).
It may be objected that there are other recipes. (One such recipe might be ‘take a random branch that has at least one player node on it, then take a random player-instance somewhere along that branch’. This of course gives 1⁄2 as the answer to the Sleeping Beauty problem.)
I don’t really have any ‘absolute justification’ for mine, except that it gives the solution to an elegant decision problem: “At every player-instance, try to work out which player-instance you are, so as to minimize -log(subjective probability) at that instance.” (With it being implicit that your final utility is the sum of all such ‘log(subjective probability)’ terms along the branch, so that maximizing utility means minimizing total surprisal.)
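To make that decision problem concrete, here is a minimal sketch (mine, not part of the original comment) of the Sleeping Beauty case: two equally likely worlds, Heads with one awakening and Tails with two, and a credence assigned to each of the three possible player-instances. The function names are my own illustrative choices.

```python
import math

def expected_surprisal(q_h, q_tm, q_tt):
    """World-expected total surprisal for credences over the three
    player-instances (Heads-Monday, Tails-Monday, Tails-Tuesday).

    Heads world (prob 1/2): one awakening, contributing -log q_h.
    Tails world (prob 1/2): two awakenings, contributing
    -log q_tm and -log q_tt.
    """
    return 0.5 * (-math.log(q_h)) + 0.5 * (-math.log(q_tm) - math.log(q_tt))

# The 'thirder' credences (1/3 each) vs. one natural 'halfer' assignment.
thirder = expected_surprisal(1/3, 1/3, 1/3)
halfer = expected_surprisal(1/2, 1/4, 1/4)
print(thirder < halfer)  # → True: thirder credences score strictly better
```

Each instance appears with weight 1⁄2 (its world’s probability), so the minimizing credences are proportional to (1⁄2, 1⁄2, 1⁄2), i.e. 1⁄3 each — the recipe’s answer.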
You can of course define probability in a way that doesn’t refer to any specific decision theory, thus making it “independent” of decision theories. But probability is useful precisely as half of a decision theory, where you just add the “utility” ingredient to get the correct decisions out. This doesn’t work well where indexical uncertainty or mind copying is involved, because the “probabilities” you get in those situations (defined so that the resulting decisions are the ones you’d prefer, as in the justification of probability by a bet) depend on your preferences more than they normally do. In simpler situations, maximum entropy at least takes care of the situations you don’t terminally distinguish in your values, in a way that is independent of further details of your values.
Awesome, you even figured out that anthropic indexical belief updating is exactly what minimizes world-expected total surprisal (when non-indexical beliefs about relative world likelihoods are fixed). The proof is just Jensen’s inequality :)
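For reference, a sketch of that Jensen argument (my write-up, with notation of my own choosing): let w_i be the probability of the world containing player-instance i, W = Σ_j w_j, and q_i the credences with Σ_i q_i = 1. Then the candidate minimizer q_i = w_i / W is compared against any other assignment via

```latex
\sum_i w_i(-\log q_i) \;-\; \sum_i w_i\left(-\log\frac{w_i}{W}\right)
 \;=\; W\sum_i \frac{w_i}{W}\log\frac{w_i/W}{q_i}
 \;\ge\; -W\log\sum_i \frac{w_i}{W}\cdot\frac{q_i}{w_i/W}
 \;=\; -W\log\sum_i q_i \;=\; 0,
```

where the inequality is Jensen applied to the convex function -log, with equality iff q_i = w_i / W. So the surprisal-minimizing credences are the world-probability-weighted ones, i.e. the anthropic update.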
That’s another thing I’ve been delaying a top-level post on: an “explaining away” of anthropic updating by classifying precisely which decision problems it gives naive correct solutions to. I expect I’ll be too busy for the next couple of months to write it in the detail I’d like, but then you can expect to see some introductory content from me on that… unless you write it yourself, which would be even better!
Have you seen Full Non-Indexical Conditioning? (http://www.cs.toronto.edu/~radford/ftp/anth.pdf) Though the theory is mathematically incorrect, it’s very nearly right, and it’s very similar to your Sleeping Beauty approach...