Robin’s argument relies on infinite certainty in a particular view of anthropic questions. It penalizes the probability significantly, but doesn’t on its own defeat infinity concerns.
If you use EDT, then Robin’s argument cashes out as: “if there are 3^^^^3 people, then the effects of my decisions via the typical copies of me are multiplied up by O(3^^^^3), while the effects of my decisions via the lottery winner aren’t.” So then the effects balance out, and you are down to the same reasoning as if you accepted the anthropic argument. But now you get a similar conclusion even if you assign 1% probability to “I have no idea what’s going on re: anthropic reasoning.”
Do you think that works?
(Infinity still gets you into trouble with divergent sums, but this seems to work fine if you have a finite but large cap on the value of the universe.)
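To make the balancing step concrete, here is a toy sketch in Python. N is a hypothetical stand-in for 3^^^^3 (which is far too large to compute with), and the per-copy effect sizes are made-up illustrative units, not anything from the argument itself:

```python
# Hedged toy model of the EDT reading of Robin's argument.
# N stands in for 3^^^^3; all magnitudes are illustrative.
N = 10**12  # hypothetical scale of "there are N people"

# Under the hypothesis "there are N people", my decision is correlated
# with the decisions of the ~N typical copies of me, so its ordinary
# effects get multiplied by O(N). The effect via the single
# lottery-winner copy is not multiplied, but the payoff it controls
# is itself astronomical (~N).
ordinary_effect_per_copy = 1.0   # arbitrary units
mugger_payoff = N                # the promised astronomical stakes

ev_via_typical_copies = N * ordinary_effect_per_copy   # O(N)
ev_via_lottery_winner = 1 * mugger_payoff              # also O(N)

# The two channels come out comparable, so ordinary-scale
# considerations are not swamped by the mugger's channel -- and since
# this holds hypothesis-by-hypothesis, a residual 1% credence in
# "anthropic reasoning is confused" doesn't reopen the gap.
print(ev_via_typical_copies >= ev_via_lottery_winner)  # True
```

The point of the sketch is only that the O(N) factors appear on both sides and cancel, which is why the conclusion survives without putting infinite certainty in any one anthropic view.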
Coincidentally I just posted on this without having seen the OP.
Yes, but then you’re acting on probabilities of ludicrous utilities again, relying on an empirical “stabilizing assumption” in Bostrom’s language.