I’m more uncertain about this one, but I believe a separate problem with this answer is that it’s an argument about where value comes from, not an argument about what is probable. Suppose 50% of all worlds are fragile and 50% are robust. If most of the things that destroy a world are due to emerging technology, and so lie in the future, then right now we still have similar amounts of both kinds of world around (or similar measure on both classes if there are infinitely many, or whatever). So it’s not a reason to suspect we’re in a non-fragile world right now.
Another illustration: if you’re currently falling from a 90-story building, most of the expected utility is in worlds where there coincidentally happens to be a net to safely catch you before you hit the ground, or interventionist simulators decide to rescue you—even if virtually all of the probability is in worlds where you go splat and die. The decision theory looks right, but this is a lot less comforting than the interview made it sound.
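To make the gap concrete, here is a toy calculation (all numbers made up for illustration) showing how nearly all of the expected utility can sit in worlds that carry almost none of the probability:

```python
# Toy illustration, hypothetical numbers: probability vs. where the
# expected utility lives, for the falling-from-a-building example.

p_splat, p_net = 0.999, 0.001    # unconditional probabilities of each outcome
u_splat, u_net = 0.0, 1000.0     # utility in each class of world

# Total expected utility across both classes of world
ev = p_splat * u_splat + p_net * u_net

# Fraction of the expected utility that lives in "net" worlds
value_weight_net = (p_net * u_net) / ev

print(p_net)             # 0.001 -- the unconditional probability of rescue
print(value_weight_net)  # 1.0   -- yet all of the expected utility is there
```

The "value-conditioned probability" of rescue is 1, while the ordinary probability is 0.001, which is exactly why the argument says nothing about which world you should actually expect to be in.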
Yes, but the fact that the fragile worlds are much more likely to end in the future is a reason to condition your efforts on being in a robust world.
While I do buy Paul’s argument, I think it’d be very helpful if the various summaries of the interviews with him were edited to make it clear that he’s talking about value-conditioned probabilities rather than unconditional probabilities—since the claim as originally stated feels misleading. (Even if some decision theories only use the former, most people think in terms of the latter).
Is this a thing or something you just coined? “Probability” has a meaning, I’m totally against using it for things that aren’t that.
I get why the argument is valid for deciding what we should do – and you could argue that’s the only important thing. But it doesn’t make it more likely that our world is robust, which is what the post was claiming. It’s not about probability, it’s about EV.