More specifically, I’m pretty sure we humans don’t have any negative part of our utility function that grows exponentially with “badness,” so there’s no bad outcome that can overcome the exponential decrease in probability with program size and actually become a significant factor.
Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds? (Or have I missed something?)
It seems to this layman that using quantum randomization would give us no increase, or at most a tiny increase, in utility per world, relative to overwriting each bit with 0 or a piece of Lorem Ipsum. And as with Dust Specks, if we actually know we might have prevented torture then I’d get a warm feeling, which should count towards the total.
Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds?
Neither is relevant in this case. My claim is that it’s not worth spending even a second of time, even a teensy bit of thought, on changing which kind of randomization you use.
Why? Exponential functions drop off really, really quickly. Really quickly. The proportion of random bit strings that, when booted up, are minds in horrible agony drops roughly as the exponential of the complexity of the idea “minds in horrible agony.” It would look approximately like 2^-(complexity).
To turn this exponentially small chance into something I’d care about, we’d need the consequence to be of exponential magnitude. But it’s not. It’s just a regular number like 1 billion dollars or so. That’s 2^30. It’s nothing. You aren’t going to write a computer program that detects minds in horrible agony using 30 bits. You aren’t going to write one with 500 bits, either (a concentration of about one part in 10^151). It’s simply not worth worrying about things that are worth less than 10^-140 cents.
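The arithmetic here is easy to check. A quick sketch (the 500-bit program size and billion-dollar stake are the figures from the paragraph above, used as assumptions):

```python
from math import log10

# Proportion of random 500-bit strings that implement any one fixed
# 500-bit program: 2^-500, i.e. roughly one part in 10^151.
proportion = 2.0 ** -500
print(f"proportion ~ 10^{log10(proportion):.0f}")

# Even weighted against a billion-dollar stake, the expected loss
# is far below anything worth a second of thought.
stake_dollars = 1e9  # "a regular number like 1 billion dollars"
expected_loss_cents = stake_dollars * 100 * proportion
print(f"expected loss ~ 10^{log10(expected_loss_cents):.0f} cents")
```

Running this reproduces the numbers in the comment: a concentration of about 10^-151, and an expected loss around 10^-140 cents.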
I’m saying I don’t understand what you’re measuring. Does a world with a suffering simulation exist, given the OP’s scenario, or not?
If it does, then the proliferation of other worlds doesn’t matter unless they contain something that might offset the pain. If they’re morally neutral they can number Aleph-1 and it won’t make any difference.
Decision-making in many-worlds is exactly identical to ordinary decision-making. You weight the utility of possible outcomes by their measure, and add them up into an expected utility. The bad stuff in one of those outcomes only feels more important when you phrase it in terms of many-worlds, because a certainty of small bad stuff often feels worse than a chance of big bad stuff, even when the expected utility is the same.
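As a sketch of the point (names and numbers are illustrative, not from the thread): weighting outcomes by measure gives the same answer whether you read the measures as many-worlds branch weights or as ordinary probabilities.

```python
def expected_utility(outcomes):
    """outcomes: iterable of (measure, utility) pairs with measures summing to 1.
    Under many-worlds, 'measure' is branch weight; classically, it's
    probability. The arithmetic is identical either way."""
    return sum(measure * utility for measure, utility in outcomes)

# A certainty of small bad stuff vs. a tiny chance of big bad stuff:
certain_small = expected_utility([(1.0, -1.0)])
rare_huge = expected_utility([(0.999999, 0.0), (0.000001, -1_000_000.0)])
# Both come out to the same expected utility, even though the second
# can be phrased as "a world where the huge loss happens exists."
```

The second case only *feels* worse when phrased in many-worlds terms; the expected utilities are equal.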
The more competent AIs will be conquering the universe, so what plays against the low measure is the value of the universe being optimized in each of the possible ways.
If that’s what we’re worried about, then we might as well ask whether it’s risky to randomly program a classical computer and then run it.
My argument is about utility, though the probability is indeed low. On the other hand, with enough computational power a sufficiently clever evolutionary dynamic might well blow up the universe.