But the additional observers exactly cancel the extra data needed to identify a specific one, no? The length of the program that identifies the class of people alive in 2013 is the same regardless of how many people are alive in 2013. So the size of N is irrelevant, and we expect to find ourselves in classes for which C is small.
That would be true in an SIA approach (probability of a hypothesis is scaled upwards in proportion to number of observers). It’s not true in an SSA approach (there is no upscaling to counter the additional complexity penalty of locating an observer out of N observers). This is why SSA tends to favour small N (or small M for a specific reference class).
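A toy numerical sketch of the difference, under the assumptions in this thread: locating one observer among N costs about log2(N) extra bits, i.e. a multiplicative 1/N complexity penalty, and SIA (but not SSA) upweights a hypothesis in proportion to its observer count. The function name and the specific numbers are illustrative, not from any standard library:

```python
def posterior(prior, n_observers, rule):
    # Locating a specific observer among N costs ~log2(N) extra bits,
    # which shows up as a 1/N multiplicative penalty on the hypothesis.
    penalty = 1.0 / n_observers
    # SIA scales the hypothesis up in proportion to its observer count,
    # exactly cancelling the penalty; SSA applies no such upweighting.
    boost = n_observers if rule == "SIA" else 1.0
    return prior * penalty * boost

# Two hypotheses with equal priors, differing only in observer count N.
for rule in ("SIA", "SSA"):
    small = posterior(0.5, 10, rule)
    large = posterior(0.5, 10_000, rule)
    total = small + large
    print(rule, round(small / total, 4), round(large / total, 4))
```

Under SIA the two hypotheses come out equally likely (the cancellation in the question above); under SSA the uncancelled 1/N penalty makes the small-N hypothesis dominate, which is the sense in which SSA favours small N.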