Well, if you understand “choose the strategy that maximizes your unconditional expected utility”, with the implicit proviso that other beings in the universe may be able to ‘see’ your strategy whether or not you’ve executed it yet, then you pretty much understand UDT.
If you already understood this before my post, then the post won’t have been helpful. If you have the prerequisites but still can’t understand it, even after reading my post, then something’s going wrong somewhere.
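As a rough illustration (mine, not the post’s), here is a minimal Python sketch of that rule applied to Newcomb’s problem, assuming a perfect predictor and the standard $1,000,000 / $1,000 payoffs; the function and variable names are just for this example.

```python
# A minimal sketch of "choose the strategy that maximizes your
# unconditional expected utility" in Newcomb's problem, under the
# assumption of a perfect predictor who 'sees' the strategy itself.

STRATEGIES = ["one-box", "two-box"]

def utility(strategy: str) -> int:
    """Payoff given that the predictor has already 'seen' the strategy.

    Because the predictor reads the strategy (not any particular
    execution of it), the opaque box holds $1,000,000 exactly when
    the strategy is to one-box.
    """
    opaque = 1_000_000 if strategy == "one-box" else 0
    transparent = 1_000  # always present in the visible box
    if strategy == "one-box":
        return opaque
    return opaque + transparent  # two-boxing takes both boxes

# UDT-style choice: score each strategy as a whole, without
# conditioning on "the boxes are already filled", and pick the best.
best = max(STRATEGIES, key=utility)
print(best, utility(best))  # -> one-box 1000000
```

The point of the sketch is that the maximization ranges over whole strategies rather than over actions conditioned on the boxes already being filled, which is why it selects one-boxing.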
I’ve only skimmed it so far, and I like the diagrams, but I think they would be helped tremendously by marking the Util boxes in a way that indicates their relative goodness.
I agree—AlephNeil, you should add payoffs to the diagrams and perhaps textual descriptions of the games.
I also think that this analysis should be polished up and published in a philosophy or game theory journal, assuming that it’s sound and that no one else came up with it before. Newcomb-like problems are much debated in philosophy, and finding a reformulation where the “rational” strategy is to one-box may be a fairly big deal.
Thanks, but it’s not my theory—it’s by Wei Dai and Vladimir Nesov.
“you should add payoffs to the diagrams and perhaps textual descriptions of the games.”
Yes, in hindsight this would have made the post much more accessible. Somehow I was imagining that this community had been ‘bathed’ in these problems for so long that nearly everyone would instantly ‘get’ the diagrams… or that, if they didn’t, ‘filling in the gaps’ would be easy and fun rather than difficult and confusing.