# In SIA, reference classes (almost) don’t matter

This is another write-up of a fact that is generally known, but that I haven’t seen proven explicitly: the fact that SIA does not depend upon the reference class.

Specifically:

Assume there are a finite number of possible universes $U_i$. Let $R$ be a reference class of finitely many agents in those universes, and assume you are in $R$. Let $R_0$ be the reference class of agents subjectively indistinguishable from you. Then SIA using $R$ is independent of $R$ as long as $R_0 \subseteq R$.

Proof:

Let $\{U_i \mid i \in I\}$ be a set of universes for some indexing set $I$, and $P$ a probability distribution over them. For a universe $U_i$, let $R(U_i)$ be the number of agents in the reference class $R$ in $U_i$.

Then if $p_R$ is the probability distribution from SIA using $R$:

$$p_R(U_i) = \frac{P(U_i)R(U_i)}{\sum_{j \in I} P(U_j)R(U_j)}.$$

We now wish to update on our own subjective experience $\mathrm{sub}$. Since there are $R(U_i)$ agents in our reference class, and $R_0(U_i)$ of them have subjectively indistinguishable experiences, this updates the probabilities by weights $R_0(U_i)/R(U_i)$, which is just $p_R(U_i) \times R_0(U_i)/R(U_i)$. After normalising, this is:

$$p_R(U_i \mid \mathrm{sub}) = \frac{P(U_i)R_0(U_i)}{\sum_{j \in I} P(U_j)R_0(U_j)}.$$

Thus this expression is independent of $R$.

Given some measure theory (and measure-theoretic restrictions on $R$ to make sure expressions like $\sum_{j \in I} P(U_j)R(U_j)$ converge), the result extends to infinite classes of universes, with integrals in the proof instead of sums.
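The finite case of the proof can be checked numerically. Below is a minimal sketch (with made-up numbers for $P$, $R$, and $R_0$): computing the SIA posterior with a broad reference class and then with a narrow one gives the same distribution, as long as both contain the $R_0(U_i)$ indistinguishable copies.

```python
def sia_posterior(prior, R, R0):
    """SIA with reference class counts R, then update on being one of the R0 copies."""
    # SIA step: weight each universe by the number of agents in R it contains.
    sia = [p * r for p, r in zip(prior, R)]
    # Update on subjective experience: weight by R0(U_i)/R(U_i).
    post = [s * r0 / r for s, r0, r in zip(sia, R0, R)]
    total = sum(post)
    return [x / total for x in post]

prior = [0.5, 0.5]   # P(U_1), P(U_2) -- hypothetical
R0 = [1, 2]          # copies of "you" in each universe -- hypothetical

broad = sia_posterior(prior, [10, 20], R0)   # large reference class
narrow = sia_posterior(prior, [1, 2], R0)    # smallest allowed class: R = R_0
print(broad, narrow)  # the two distributions agree
```

The $R(U_i)$ factors cancel exactly as in the proof, leaving weights proportional to $P(U_i)R_0(U_i)$.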

When you calculate $p_R(U_i \mid \mathrm{sub})$, you perform the following transformation: $p_R(U_i) \to p_R(U_i) \times R_0(U_i)/R(U_i)$, but an $R(U_i)$ seems to go missing. Can anyone explain?

Where does the $R(U_i)$ go missing? It's there in the subsequent equation.

$p_R(U_i)$ already had a factor of $R(U_i)$, and then you divided by it, but the original factor disappears, so you are left with a division by $R(U_i)$. I don't see where the original factor of $R(U_i)$ went, which would have resulted in cancelling.

You are correct, I dropped an $R(U_i)$ in the proof, thanks! I've put it back in, and the proof is now shorter.

As I understand from the above, in SIA the real reference class is "the class of observers who are subjectively indistinguishable from me", and that is why SIA doesn't depend on any other reference class of which I could be a member. However, this doesn't exclude the use of SSA logic for SSA-related conclusions.

An example of SSA logic: I am a member of the class of people who were born between the equator and a pole of the Earth, and by the fact of my birth I was randomly selected from this class. Thus, my place of birth should be roughly randomly selected (accounting for different population densities) between the equator and the pole, and is unlikely to be exactly on the equator or at the pole. I was born at latitude 55°, so SSA logic works in predicting my latitude of birth.

I could be a member of many different SSA classes, and for each of them I could make independent predictions about my position in it.

For SIA, the class of my "subjectively indistinguishable" copies is also not very exact. Different interpretations of such a class are:

1) Everybody who has the same thought process as me now is me. There could be a lot of them, even on Earth.

2) Everybody who has the same total sum of all visual (and other) experiences as me, even though I would not be able to account for all the differences, as they are too small to notice.

3) Everybody who has exactly the same brain as me. This class is hundreds of orders of magnitude rarer than (2), as the same experience could be generated by different brains.

I think that the "true" SIA class is somewhere between (1) and (2) - or, more likely, there is no "true" SIA class, in the same way as there is no true SSA class, and different types of SIA could be used to answer different questions.

Yep. ^_^

This result seems strange to me, even though the maths seems to check out. Is there a conceptual explanation of why this should be the case?

Maybe: larger reference classes make the universes more likely, but make it less likely that you would be a specific member of that reference class, so when you update on who you are in the class, the two effects cancel out.

More conceptually: in SIA, the definition of the reference class commutes with restrictions on that reference class. So it doesn't matter if you take the reference class of all humans, then specialise to the ones alive today, then specialise to you; or take the reference class of all humans alive today, then specialise to you; or just take the reference class of you. SIA is, in a sense, sensible with respect to updating.
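This commuting property can be sketched numerically (all counts hypothetical): restricting from "all humans" down to "you" in steps, or jumping straight to "you", produces the same posterior.

```python
def sia_update(prior, counts):
    """Weight each universe by its count in the reference class, then renormalise."""
    w = [p * c for p, c in zip(prior, counts)]
    t = sum(w)
    return [x / t for x in w]

def restrict(post, old, new):
    """Restrict the reference class: re-weight by (new count / old count), renormalise."""
    w = [p * n / o for p, n, o in zip(post, new, old)]
    t = sum(w)
    return [x / t for x in w]

prior = [0.5, 0.5]          # two universes -- hypothetical prior
all_humans = [100, 200]     # agents per universe in each class -- hypothetical
alive_today = [10, 30]
just_you = [1, 3]

# Three routes to the same posterior:
a = restrict(restrict(sia_update(prior, all_humans), all_humans, alive_today),
             alive_today, just_you)
b = restrict(sia_update(prior, alive_today), alive_today, just_you)
c = sia_update(prior, just_you)
print(a, b, c)  # all three agree
```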

Does that help?

Thanks, that’s helpful. Actually, now that you’ve put it that way, I recall having known this fact at some point in the past.

Another way of seeing SIA + update on yourself: weigh each universe by the expected number of exact (subjective) copies of you in it, then renormalise.

Yes, but note that SSA can get this same result. All they have to do is say that their reference class is R—whatever set the SIA person uses, they use the same set. If they make this move, then they are reference-class-independent to exactly the same degree as SIA.

SSA is not reference-class independent. If it uses $R$, then the SSA probability is $P(U_i \mid \mathrm{sub})$ (rather than $p_R(U_i \mid \mathrm{sub})$), which is

$$\frac{P(U_i)R_0(U_i)/R(U_i)}{\sum_{j \in I} P(U_j)R_0(U_j)/R(U_j)},$$

which is not independent of $R$ (consider doubling the size of $R$ in one world only: that makes that world less likely relative to all the others).
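The counter-example above can be sketched with made-up numbers: start with two equally likely worlds, each containing one copy of you, and double the reference class in world 1 only. Under the SSA formula the enlarged world becomes less likely.

```python
def ssa_posterior(prior, R, R0):
    """SSA posterior: weight each universe by P(U_i) * R0(U_i) / R(U_i), renormalise."""
    w = [p * r0 / r for p, r0, r in zip(prior, R0, R)]
    total = sum(w)
    return [x / total for x in w]

prior, R0 = [0.5, 0.5], [1, 1]       # hypothetical numbers
symmetric = ssa_posterior(prior, [2, 2], R0)   # equal reference classes
doubled = ssa_posterior(prior, [4, 2], R0)     # R doubled in world 1 only
print(symmetric, doubled)  # world 1 loses probability mass in the second case
```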

Ah, my mistake, sorry. I was thinking of a different definition of reference-class-independent than you were; I should have read more closely.

Oh, what definition were you using? Anything interesting? (Or do you mean *before* updating on your own experiences?)

Sometimes when people say SIA is reference-class independent and SSA isn't, they mean it as an argument that SIA is better than SSA, because it is philosophically less problematic: the choice of reference class is arbitrary, so if we don't have to make that choice, our theory is overall more elegant. This was the sort of thing I had in mind.

On that definition, SSA is only more arbitrary than SIA if it makes the reference class different from the class of all observers (which some proponents of SSA have done). SIA has a concept of observer too - at least, a concept of observer-indistinguishable-from-me (which presumably is a proper subset of observer, though now that I think about it this might be challenged). Maybe I was doubly wrong: maybe SIA only needs the concept of observer-indistinguishable-from-me.