I’d like to offer a counterargument that, I’ll admit, gets into some pretty gnarly philosophical territory quite quickly.
Premise 1: We are not simulated minds—we are real, biological observers.
Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.
Inference: If we really are a typical observer (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than to simulated ones. This must be true even if there are vastly more simulations in a numerical sense—even uncountably infinitely more—because our non-simulated status would be extremely improbable otherwise.
Conclusion: So, under the assumption that we are typical, our existence as real observers implies that simulated minds must have much lower measure than real ones. Therefore, even if digital minds exist in large numbers, they may not matter proportionally in ethical calculations—since their measure, not just their count, determines their relevance. This gives us reason to think utilitarianism, when properly weighted by measure, may still prioritize the welfare of real, biological minds.
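To make the arithmetic behind the inference concrete, here’s a toy sketch in Python. The counts and weights are purely hypothetical illustrations, not figures from the argument itself:

```python
# Toy anthropic calculation: measure-weighted probability of finding
# yourself to be a biological observer. All numbers are hypothetical.

def p_biological(n_bio, n_sim, w_bio, w_sim):
    """Probability that a randomly sampled observer is biological,
    weighting each mind by its per-mind measure."""
    total = n_bio * w_bio + n_sim * w_sim
    return (n_bio * w_bio) / total

# If simulations outnumber biological minds a trillion to one and
# per-mind measures are equal, being biological is a roughly
# one-in-a-trillion observation:
print(p_biological(n_bio=1, n_sim=10**12, w_bio=1.0, w_sim=1.0))
# ~1e-12

# For our (assumed) biological status to be unsurprising, the per-mind
# measure of simulated observers must shrink enough to offset the count:
print(p_biological(n_bio=1, n_sim=10**12, w_bio=1.0, w_sim=1e-12))
# 0.5
```

The point of the sketch: under typicality, observing ourselves to be biological pushes the product (count × measure) for simulated minds down, however large the count is.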
The easiest explanation for the high measure of biological minds is that simulated minds lack consciousness.
Of course it is, but I’m a functionalist.
Oh look, it seems that biological systems are continuous in nature, while simulated ones are discrete. This makes a huge difference:

1. The total duration of change in a continuous system is like the measure of the real numbers, while the duration of change in a discrete system is like the measure of the whole numbers: the set of instants at which a discrete system changes is countable, so it has Lebesgue measure zero. On this view, the duration of a simulated mind’s pain is zero seconds, while the duration of pain in a continuous system can be measured in seconds.

2. The claim of my post falls apart for continuous systems: you can’t just run infinitely many distinct continuous minds inside one physical system, because you can no longer leverage digital deduplication at the hardware level.

3. Some other paradoxes get eliminated by this distinction. For example, with digital minds we can just run experience replay to generate an infinite loop of happiness (see the sketch below), but with continuous minds, building the save-and-load functionality is much more problematic.
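As a toy illustration of point 3, here’s a minimal Python sketch. It assumes, purely for illustration, that a digital mind’s full state is an ordinary serializable value; the `mind_state` dictionary and its contents are hypothetical stand-ins:

```python
import copy

# Hypothetical stand-in for a digital mind's complete state: because
# it is discrete data, "saving" it is just copying bits.
mind_state = {"memories": ["a perfect day"], "affect": "joy"}

snapshot = copy.deepcopy(mind_state)  # "save" is trivial for discrete state

# Experience replay: restore the snapshot and re-run the same happy
# moment over and over (bounded here so the example terminates).
for _ in range(3):
    mind_state = copy.deepcopy(snapshot)  # "load"
    # ...re-run the happy experience from the restored state...

# A continuous physical system has no analogous finite bit-string to
# copy: an exact "save" would mean recording uncountably many degrees
# of freedom, which is where the disanalogy in point 3 bites.
```

The design point is that save/load/replay falls out for free from discreteness, while nothing analogous is available for a continuous system.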
You’ve arrived at the same conclusion I state. I say that caring about simulated minds explodes into paradoxes in my thought experiment, so we probably shouldn’t. You reached the same conclusion, that caring about digital minds shouldn’t be a priority, through your proposed infinitesimal measure of digital minds. We’re not in disagreement here.