You’re correct that this is what happens at one of the abstraction layers, but the choice of that layer is fairly arbitrary. Consider the abstraction layers:
L1: hypervisor interface: uncountably many VMs
L2: hypervisor implementation: countably many VMs
L3: semiconductors: no VMs, only high and low signals
L4: electrons: no high and low signals, only electromagnetic fields
So yes, at L2 the number of VMs is finite. But why should morality count what happens at L2 rather than at L1, L3, or L4? That choice is too arbitrary.
Oh look: it seems that biological systems are continuous in nature, while simulated ones are discrete. This makes a huge difference:

1. The total duration of change in a continuous system is like the measure of the real numbers, while the duration of change in a discrete system is like the measure of the whole numbers, which is infinitely smaller. From this perspective, the duration of a simulated mind’s pain is zero seconds, while the duration of pain in a continuous system can be measured in seconds.
2. The claim of my post falls apart for continuous systems: you can’t just run infinitely many distinct continuous minds inside one physical system, because you can no longer leverage digital deduplication at the hardware level.
3. Some other paradoxes get eliminated by this distinction. For example, with digital minds we can just run experience replay to generate an infinite loop of happiness, but with continuous minds, implementing save-and-load functionality is much more problematic.
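Point 1 can be made precise with Lebesgue measure. A sketch (the identification of “duration of change” with the measure of the set of instants at which change occurs is my framing, not a standard definition):

```latex
% A discrete system changes state at a countable set of instants
% T = \{t_1, t_2, \dots\} \subset \mathbb{R}. Any countable set has
% Lebesgue measure zero:
\mu(T) \;\le\; \sum_{n=1}^{\infty} \mu(\{t_n\}) \;=\; \sum_{n=1}^{\infty} 0 \;=\; 0,
% while a continuous process occupies an interval of positive measure:
\mu([a, b]) \;=\; b - a \;>\; 0 \quad \text{for } a < b.
```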
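Point 2 leans on how digital deduplication works in practice (e.g. KSM-style merging of identical memory pages, or content-addressed storage). A toy Python sketch with hypothetical names (`PageStore` is illustrative, not a real API), showing why N identical digital minds can cost O(1) storage rather than O(N):

```python
# Content-addressed deduplication: identical memory pages from many "VMs"
# collapse to a single stored copy, keyed by their content hash.
import hashlib


class PageStore:
    def __init__(self):
        self.pages = {}  # content hash -> page bytes, stored exactly once

    def store(self, page: bytes) -> str:
        key = hashlib.sha256(page).hexdigest()
        self.pages.setdefault(key, page)  # a duplicate page costs nothing extra
        return key


store = PageStore()
vm_image = b"identical mind state" * 1024
refs = [store.store(vm_image) for _ in range(1000)]  # "run" 1000 identical copies
assert len(store.pages) == 1  # only one physical copy actually exists
```

For continuous (analog) systems there is no content hash to compute, so this collapse is unavailable.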
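Point 3 rests on the fact that a digital mind’s state can be snapshotted and reloaded exactly. A minimal sketch (`run_step` is a hypothetical stand-in for one tick of a simulated mind):

```python
# "Experience replay": reload a saved state and re-run it, reproducing the
# exact same experience as many times as desired.
import copy


def run_step(state):
    # Hypothetical stand-in for one tick of a simulated mind.
    return {"happiness": state["happiness"] + 1}


snapshot = {"happiness": 10}  # save
replays = [run_step(copy.deepcopy(snapshot)) for _ in range(3)]  # load & replay
assert all(s == {"happiness": 11} for s in replays)  # identical experience each time
```

A continuous physical system has no analogous lossless save/load operation, which is what blocks the infinite-happiness loop there.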