I claim that it has an implication: utilitarianism is not compatible with moral patienthood of digital minds. So one has to choose, either utilitarianism or the welfare of digital minds, but not both. Otherwise we get that every second we didn't dedicate to building an infinite number of happy minds is infinitely bad, and after we have created an infinite number of happy minds, utilitarianism gives no instructions on how to behave, because we are already infinitely saintly and practically no action can change our total saintliness score, which is absurd. There are multiple ways out of this. First, if you want to keep utilitarianism, you can define moral patienthood more strictly, so that no digital mind can become a moral patient. For example, you can say that Orch OR is correct and any mind must be based on quantum-mechanical computation, otherwise it doesn't count. But I expect that digital minds will soon arrive and gain a lot of power; they won't like this attitude and will make it illegal. Another way is to switch to something other than utilitarianism, something that doesn't rely on a concept like "total happiness of everything".
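To spell out the arithmetic behind that (just a toy sketch in my own notation, not a precise model of value): if each happy digital mind contributes some utility \(u > 0\) and we can create \(N\) of them, total utilitarianism scores the world at

\[
U(N) = N \cdot u, \qquad \lim_{N \to \infty} U(N) = \infty .
\]

Before the limit, every moment spent not increasing \(N\) forgoes unboundedly much value, so it counts as arbitrarily bad; after the limit, any finite action changes the total by some finite \(c\), and \(\infty + c = \infty\), so all actions come out morally equivalent.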
I’d like to offer a counterargument that, I’ll admit, can get into some pretty gnarly philosophical territory quite quickly.
Premise 1: We are not simulated minds—we are real, biological observers.
Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.
Inference: If we really are a typical observer (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than to simulated ones. This must be true even if there are vastly more simulations in a numerical sense—even uncountably infinitely more—because our non-simulated status would be extremely improbable otherwise.
Conclusion: So, under the assumption that we are typical, our existence as real observers implies that simulated minds must have much lower measure than real ones. Therefore, even if digital minds exist in large numbers, they may not matter proportionally in ethical calculations—since their measure, not just their count, determines their relevance. This gives us reason to think utilitarianism, when properly weighted by measure, may still prioritize the welfare of real, biological minds.
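To put the inference in symbols (a rough sketch; \(\mu_{\mathrm{bio}}\), \(\mu_{\mathrm{sim}}\), \(N_{\mathrm{sim}}\), and \(w_{\mathrm{sim}}\) are just my shorthand for whatever the true observer measure turns out to be): writing \(\mu_{\mathrm{bio}}\) and \(\mu_{\mathrm{sim}}\) for the total measure of biological and simulated observers, measure-weighted self-sampling gives

\[
P(\text{we are biological}) = \frac{\mu_{\mathrm{bio}}}{\mu_{\mathrm{bio}} + \mu_{\mathrm{sim}}} .
\]

Premises 1 and 2 together say this probability is not tiny, which forces \(\mu_{\mathrm{sim}} \lesssim \mu_{\mathrm{bio}}\). And if \(\mu_{\mathrm{sim}} = N_{\mathrm{sim}} \, w_{\mathrm{sim}}\) for \(N_{\mathrm{sim}}\) simulated minds of per-mind weight \(w_{\mathrm{sim}}\), even \(N_{\mathrm{sim}} \to \infty\) is compatible with that, provided \(w_{\mathrm{sim}}\) falls off at least as fast as \(1/N_{\mathrm{sim}}\); that is the sense in which count and measure come apart.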
The easiest explanation for the high measure of biological minds is that simulated minds lack consciousness.
Of course it is, but I’m a functionalist.
You’ve arrived at the same conclusion as I state. I say that caring about simulated minds explodes into paradoxes in my thought experiment, so we probably shouldn’t. You came to the same conclusion, that caring about digital minds shouldn’t be a priority, via the infinitesimal measure of digital minds you introduced. We’re not in disagreement here.