I’d like to offer a counterargument that, I’ll admit, can get into some pretty gnarly philosophical territory quite quickly.
Premise 1: We are not simulated minds—we are real, biological observers.
Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.
Inference: If we really are a typical observer (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than to simulated ones. This must be true even if there are vastly more simulations in a numerical sense—even uncountably infinitely more—because our non-simulated status would be extremely improbable otherwise.
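To make the “extremely improbable” step concrete, here is a minimal sketch of the self-sampling arithmetic, with illustrative symbols that aren’t part of the original argument: let $N_b$ and $N_s$ be the numbers of biological and simulated minds, and $\mu_b$ and $\mu_s$ their per-mind measures. The probability that a randomly sampled observer is biological is

$$P(\text{biological}) = \frac{N_b \,\mu_b}{N_b \,\mu_b + N_s \,\mu_s}$$

For this to stay near 1 while $N_s \gg N_b$, the measure ratio has to outpace the count ratio: $\mu_b / \mu_s \gg N_s / N_b$. (If the number of simulations is infinite, the same point requires the total simulated measure $N_s \,\mu_s$ to remain finite and small.)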
Conclusion: So, under the assumption that we are typical, our existence as real observers implies that simulated minds must have much lower measure than real ones. Therefore, even if digital minds exist in large numbers, they may not matter proportionally in ethical calculations—since their measure, not just their count, determines their relevance. This gives us reason to think utilitarianism, when properly weighted by measure, may still prioritize the welfare of real, biological minds.
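And to spell out the ethical step in the same hedged notation: measure-weighted utilitarianism, as described here, would rank outcomes by the weighted sum

$$U = \sum_i \mu_i \, u_i$$

rather than by the unweighted $\sum_i u_i$, where $u_i$ is the welfare of mind $i$ and $\mu_i$ its measure. If the inference above is right and $\mu_i$ is vanishingly small for simulated minds, their aggregate welfare contributes almost nothing to $U$, however large their count.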
You’ve arrived at the same conclusion I do. I say that caring about simulated minds explodes into paradoxes in my thought experiment, so we probably shouldn’t. You reach the same conclusion, that caring about digital minds shouldn’t be a priority, via the infinitesimal measure you assign to digital minds. We’re not in disagreement here.
The easiest explanation for the high measure of biological minds is that simulated minds simply lack consciousness.
Of course it is, but I’m a functionalist, so that explanation isn’t open to me.