Yeah, I think this is right. The whole problem seems to arise from ignoring the copies of you who see "X is false." If your prior on X is 0.5, then the clones who see "X is false" should behave exactly analogously to you, and if you're going to be a clone-altruist you should care about all the clones of you whose behavior and outcomes you can easily predict.
I should also point out that this setup assumes 0.99N clones see one calculator output and 0.01N see the opposite; the actual split will depend on exactly what type of multiverse you're considering (quantum vs. inflationary vs. something else) and what type of randomness is injected into the calculator (classical or quantum). But if you include both the "X is true" and "X is false" copies, I think it ends up not mattering.
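To make that concrete, here's a toy calculation (a minimal sketch; the 0.5 prior, 99% calculator accuracy, and clone count N are just the assumptions above made explicit, and `expected_correct` is a name I made up). If every clone follows the same policy, the number of clones who end up believing the truth is the same whichever way X turns out, so the asymmetry dissolves once both groups are counted:

```python
PRIOR_X_TRUE = 0.5      # prior on the proposition X
CALC_ACCURACY = 0.99    # fraction of clones whose calculator shows the truth
N = 1_000_000           # total number of clones (arbitrary)

def clone_counts(x_is_true):
    """Return (# clones seeing "X is true", # clones seeing "X is false")."""
    right, wrong = CALC_ACCURACY * N, (1 - CALC_ACCURACY) * N
    return (right, wrong) if x_is_true else (wrong, right)

def expected_correct(policy):
    """Expected number of clones who end up believing the truth, averaged
    over both possible worlds, when every clone maps its calculator
    reading to a belief via `policy(reading) -> bool`."""
    total = 0.0
    for x_is_true, p_world in [(True, PRIOR_X_TRUE), (False, 1 - PRIOR_X_TRUE)]:
        see_true, see_false = clone_counts(x_is_true)
        correct = (see_true if policy(True) == x_is_true else 0.0) \
                + (see_false if policy(False) == x_is_true else 0.0)
        total += p_world * correct
    return total

# "Trust the calculator": 0.99 * N clones are right in either world, so the
# world-averaged answer carries no dependence on which output you saw.
print(expected_correct(lambda reading: reading))   # 990000.0
```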
Thank you for bringing attention to this issue; I think it's an under-appreciated problem. I agree with you that the "force" measure is untenable, and the "pattern" view, while better, probably can't work either.
Count-based measures seem to fail because they rely on drawing hard boundaries between minds. There will also be cases where it's not even clear whether a system counts as a mind at all, and on the "count" view we'd be forced to make definitive, all-or-nothing calls in exactly those cases.
Mass/energy-based measures seem better because they allow you to treat anthropic measure as the continuous variable that it is, but I also don’t think they can be the answer. In particular, they seem to imply that more efficient implementations of a mind (in terms of component size or power consumption or whatever) would have lower measure than less efficient ones, even if they have all the same experiences.
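To put toy numbers on that objection (the wattage figures are invented purely for illustration):

```python
# Hypothetical numbers: the same mind implemented wastefully vs. efficiently.
watts_inefficient = 150_000.0   # power-hungry implementation
watts_efficient   = 15.0        # efficient implementation, same experiences

# Under an energy-proportional measure, the efficient mind would count
# 10,000x less, even though its experiences are identical.
print(watts_efficient / watts_inefficient)   # 0.0001
```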
This is debatable, but it strikes me that anthropic measure and "degree of consciousness" are closely related concepts. Fundamentally, for a system to have any anthropic measure at all, it needs to count as an "observer" or an "experiencer," which seems pretty close to saying that it's conscious on some level.
If we equate consciousness with a kind of information processing, then anthropic measure could be a function of "information throughput" or something like that. If System A can "process" more bits of information per unit time than System B, then it can have more experiences than System B, and arguably should be given more anthropic measure. In other words, if you identify "yourself" with the set of experiences you're having in a given moment, then those experiences are more likely to be realized in a system with more computing power, and hence a greater capacity for experience, than in a system with less compute. Note that, on this view, the information being processed doesn't have to be compressed or deduplicated in any way; systems running the same computation on many threads in parallel would still have more measure than single-threaded systems, ceteris paribus.
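Here's a minimal sketch of how that might be formalized (the `Thread`/`System` structure and the throughput numbers are my own placeholders, not anything established): measure is just summed bits-per-second across threads, so four parallel copies of the same computation get four times the measure of one:

```python
from dataclasses import dataclass

@dataclass
class Thread:
    bits_per_second: float   # information this thread processes per second

@dataclass
class System:
    threads: list            # parallel threads, possibly running identical code

def anthropic_measure(system):
    # Raw summed throughput: duplicate computations on separate threads
    # each count in full; nothing is compressed or deduplicated.
    return sum(t.bits_per_second for t in system.threads)

single   = System([Thread(1e9)])
parallel = System([Thread(1e9) for _ in range(4)])   # same computation, 4 copies

assert anthropic_measure(parallel) == 4 * anthropic_measure(single)
```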
There’s a lot that needs to be fleshed out with this “computational theory of anthropic measure” but it seems like the truth has to be something in this general direction.