Yep, sorry, I saw −3, −2, −1, etc., and concluded you weren’t doing the 2 jumps; my bad!

Then somehow the work is just postponed to the point where we try to combine partial preferences?

Yes. But unless we have other partial preferences or meta-preferences, the only reasonable way of combining them is simply to add them, after weighting.
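To make "add them, after weighting" concrete, here is a minimal sketch; the outcome names, utility values, and weights are hypothetical placeholders, not anything from this discussion:

```python
# Combine partial preferences by weighted addition, the only method
# available here absent further meta-preferences. All names and
# numbers below are made-up illustrations.

def combine(partial_utils, weights):
    """Weighted sum of partial utility functions over the same outcomes."""
    outcomes = partial_utils[0].keys()
    return {o: sum(w * u[o] for w, u in zip(weights, partial_utils))
            for o in outcomes}

# Two hypothetical partial preferences over outcomes A and B.
u1 = {"A": 1.0, "B": 0.0}
u2 = {"A": 0.2, "B": 0.8}

combined = combine([u1, u2], weights=[0.5, 0.5])
print(combined)  # A gets 0.5*1.0 + 0.5*0.2 = 0.6; B gets 0.4
```

The weights are where all the real work hides, which is why the choice of weighting formula matters.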

I like your reciprocal weighting formula. It seems to have good properties.

If we set aside infinity, which I don’t know how to deal with, then the SIA answer does not depend on utility bounds—unlike the answer in my anthropic decision theory post.

Q1: “How many copies of people (currently) like me are there in each universe?” is well-defined in all finite settings, even huge ones.

No, I mean not many, as compared with how many there are in universes 1 and 2. Other observers are not relevant to Q1.
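As a toy illustration of how Q1 feeds into the SIA answer in a finite setting (the universe priors and copy counts below are invented for the example, not taken from this thread):

```python
# SIA weights each universe by its prior times the number of copies
# of "people (currently) like me" it contains (Q1). All numbers here
# are hypothetical illustrations.

priors = {"universe_1": 0.5, "universe_2": 0.3, "universe_3": 0.2}
copies = {"universe_1": 10, "universe_2": 5, "universe_3": 1}

# Unnormalized SIA weight: prior * copy count; then normalize.
weights = {u: priors[u] * copies[u] for u in priors}
total = sum(weights.values())
sia_probs = {u: w / total for u, w in weights.items()}

print(sia_probs)
```

Note that observers who are not copies of "people like me" never enter the calculation, matching the point above that other observers are not relevant to Q1.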

I’ll reiterate my claim that different anthropic probability theories are “correct answers to different questions”: https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions