What you described is not UDT, and not even a decision theory. For example, what is U() for? It's not the utility of the agent's decision.
I gave an accurate definition of Wei Dai’s utility function U. As you note, I did not say what U is for, because I was not giving a complete recapitulation of UDT. In particular, I did not imply that U() is the utility of the agent’s decision.
(I understand that U() is the utility that the agent assigns to having program Pi undergo execution history Ei for all i. I understand that, here, Ei is a complete history of what the program Pi does. However, note that this does include the agent’s chosen action if Pi calls the agent as a subroutine. But none of this was relevant to the point that I was making, which was to point out that my post only applies to UDT agents that use a particular kind of function U.)
(Although Wei Dai doesn’t seem to consistently follow the distinction in terminology himself, it begins to matter when you try to express things formally.)
It's looking to me like I'm following one of Wei Dai's uses of the word "probability", and you're following another. You think that Wei Dai should abandon the usage that I'm following. I am not seeing that this dispute is more than semantics at this point. That wasn't the case earlier, by the way, where I really did misunderstand where the probabilities of possible worlds show up in Wei Dai's formalism. I now maintain that these probabilities are the values I denoted by pr(Pi) when U has the form I describe in the footnote. Wei Dai is welcome to correct me if I'm wrong.
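As a toy illustration of the special form discussed in the footnote, here is a sketch of U decomposing as a probability-weighted sum over possible worlds. The names (`expected_utility`, `pr`, `u`) are illustrative only, not Wei Dai's notation:

```python
# Sketch only: a utility function U of the special form where it decomposes
# as a sum over possible world-programs Pi, each weighted by a probability
# pr(Pi) and contributing a utility u(Ei) for its execution history.
# All names here are illustrative, not part of UDT's formal definition.

def expected_utility(worlds):
    """worlds: list of (pr_Pi, u_Ei) pairs, where pr_Pi plays the role of
    the probability of possible world Pi, and u_Ei the utility the agent
    assigns to that world's execution history."""
    return sum(pr * u for pr, u in worlds)

# Example: two possible worlds with probabilities 0.7 and 0.3
print(expected_utility([(0.7, 10.0), (0.3, -5.0)]))  # 5.5
```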
I agree with this description now. I apologize for this instance and a couple of others; I stayed up too late last night, and a negative impression of your post from the other mistakes primed me to see mistakes where everything was correct.
It was a little confusing, because the probabilities here have nothing to do with the probabilities supplied by mathematical intuition, while the probabilities of mathematical intuition are still in play. In UDT, different world-programs correspond to observational and indexical uncertainty, while different execution strategies correspond to logical uncertainty about a specific world-program. Only where there is essentially no indexical uncertainty does it make sense to introduce probabilities of possible worlds, factoring the probabilities otherwise supplied by mathematical intuition into these together with those describing logical uncertainty.
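The factorization being described might be sketched like this (a toy model under the stated assumption of negligible indexical uncertainty; all names and numbers are illustrative, not part of UDT's formalism):

```python
# Toy factorization: the joint probability over (world-program, execution
# history) splits into pr(world) * pr(history | world), where the second
# factor captures logical uncertainty about what that world-program does.
# All names and values here are illustrative assumptions.

pr_world = {"P1": 0.7, "P2": 0.3}      # probabilities of possible worlds
pr_history = {                          # logical uncertainty within each world
    "P1": {"E1a": 0.9, "E1b": 0.1},
    "P2": {"E2a": 1.0},
}

# Recover the joint distribution over (world, history) pairs
joint = {
    (w, e): pr_world[w] * pe
    for w, histories in pr_history.items()
    for e, pe in histories.items()
}

# The joint probabilities sum to 1 (up to floating-point error)
print(sum(joint.values()))
```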
Thanks for the apology. I accept responsibility for priming you with my other mistakes.
I hadn’t thought about the connection to indexical uncertainty. That is food for thought.