What you’ve done is constructed an analogy that looks like this:
Generation of 10^(3W) math problems <---> Generation of 10^(3W) people
Funny set of rules A whereby an observer is assigned a problem <---> SSA
Funny set of rules B whereby an observer is assigned a problem <---> SIA
Probability that the observer is looking at problem X <---> Anthropic probability of being person X
But whereas “the probability that the observer is looking at problem X” depends on whether we arbitrarily choose rules A or B, the anthropic probability of being person X is supposed (by those who believe anthropic probabilities exist) to be a determinate matter. It’s not supposed to be a mere convention that we choose SSA or SIA, it’s supposed to be that one is ‘correct’ and the other ‘wrong’ (or both are wrong and something else is correct).
If we only consider non-anthropic problems, then we can resolve everything satisfactorily by choosing ‘rules’ like A or B (and note that unless we add an observer and choose rules, there won’t be any questions to resolve), but that won’t tell us anything about SSA and SIA. (This is a clearer explanation than the one in my first comment of what I think ‘doesn’t make sense’ about your approach.)
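For concreteness, the way SSA and SIA come apart can be sketched numerically. This toy two-world setup (one observer vs. a thousand, equal priors) is my own illustration, not something from the discussion above:

```python
# Toy illustration (my own example): two equally likely worlds,
# one containing 1 observer and one containing 1000.
prior = {"small_world": 0.5, "big_world": 0.5}
observers = {"small_world": 1, "big_world": 1000}

# SSA: you are sampled from the observers *within* whichever world
# is actual. Finding yourself to be some observer is guaranteed
# either way, so the posterior over worlds equals the prior.
ssa = dict(prior)

# SIA: each world's prior is weighted by its number of observers,
# then renormalized, so the bigger world becomes far more likely.
total = sum(prior[w] * observers[w] for w in prior)
sia = {w: prior[w] * observers[w] / total for w in prior}

print(ssa)  # {'small_world': 0.5, 'big_world': 0.5}
print(sia)  # big_world gets 1000/1001, roughly 0.999
```

The point at issue is whether one of these two update rules is objectively correct, or whether the choice between them is as conventional as the choice between rules A and B in the math-problem setup.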
It makes sense to look at it that way, yes.
I do think, though, that something like A or B should be able to be accurately described as true of the world.