I’m saying I don’t understand what you’re measuring. Does a world with a suffering simulation exist, given the OP’s scenario, or not?
If it does, then the proliferation of other worlds doesn’t matter unless they contain something that might offset the pain. If they’re morally neutral, they can number Aleph-1 and it won’t make any difference.
Decision-making in many-worlds is exactly the same as ordinary decision-making. You weight the utility of each possible outcome by its measure and sum them into an expected utility. The bad stuff in one of those outcomes only feels more important when you phrase it in terms of many-worlds, because a certainty of small bad stuff often feels worse than a chance of big bad stuff, even when the expected utility is the same.
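A minimal sketch of that calculation (the measures and utilities below are made-up illustrative numbers, not anything from the OP’s scenario): the "certain small harm" and the "small chance of a big harm" framings come out identical once you weight by measure and sum.

```python
def expected_utility(branches):
    """Weight each outcome's utility by its measure (or probability) and sum."""
    return sum(measure * utility for measure, utility in branches)

# A certainty of a small harm: one branch with measure 1.0 and utility -1.
certain_small_harm = [(1.0, -1.0)]

# A 1% chance of a big harm, otherwise nothing: same expected utility.
chance_of_big_harm = [(0.01, -100.0), (0.99, 0.0)]

print(expected_utility(certain_small_harm))   # -1.0
print(expected_utility(chance_of_big_harm))   # -1.0
```

Whether you read the weights as single-world probabilities or as many-worlds branch measures changes nothing about the arithmetic.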