Under UDASSA, if there are N observers (or observer moments), then it takes an average of log N + C bits to localise any one of them. C here is an overhead representing the length of the program needed to identify the observers or observer moments. (Think of the program as creating a list of N candidates, and then a further log N bits are needed to pick one item from the list.)
So this favours hypotheses making N small—which gives a Doomsday argument.
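To make the bookkeeping concrete, here is a minimal Python sketch, with made-up values of C and N purely for illustration: each observer-moment gets measure roughly 2^-(C + log N) = 2^-C / N, so a hypothesis with fewer observer-moments assigns any particular one of them more measure.

```python
import math

def localisation_cost_bits(C, N):
    # Bits to localise one observer-moment: a program of C bits that
    # enumerates the N candidates, plus log2(N) bits to pick one.
    return C + math.log2(N)

def moment_measure(C, N):
    # Approximate measure of a single observer-moment under this
    # scheme: 2^-(C + log2 N), i.e. 2^-C / N.
    return 2.0 ** -localisation_cost_bits(C, N)

# Hypothetical numbers chosen only for illustration: the same
# identifying program (C = 100 bits) under a "doom soon" and a
# "doom late" hypothesis about the total number of observer-moments.
for N in (10**11, 10**14):
    print(N, localisation_cost_bits(100, N), moment_measure(100, N))
```

With these invented numbers, the smaller-N hypothesis gives each observer-moment 1000 times the measure, which is the Doomsday-style shift.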
The extra complication is that there may be less overhead when using different programs to identify different sorts of observer moment (corresponding to different reference classes). Say there are M observers in “our” reference class, and “our” reference class is identified by a shorter program of length C − O (i.e. O bits less overhead).
Then provided C − O + log M < C + log N (equivalently, log M < log N + O), UDASSA will now favour hypotheses which make M as small as possible: it says nothing about the size of N.
You still get a doomsday argument in that “observers like us” are probably doomed, but it is less exciting, because of the possibility of “observers like us” evolving into an entirely different reference class.
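A sketch of the two localisation routes, again with hypothetical bit counts (the values of C, O, M, N below are invented for illustration): UDASSA effectively pays whichever description length is shorter, so once the class-specific route wins, the measure of our observer-moments depends on M but not on N.

```python
import math

def generic_route_bits(C, N):
    # Enumerate all N observers with the generic program (C bits),
    # then spend log2(N) bits picking one.
    return C + math.log2(N)

def class_route_bits(C, O, M):
    # Enumerate only the M observers in "our" reference class, using
    # a program O bits shorter than the generic one.
    return (C - O) + math.log2(M)

# Invented values: generic program C = 100 bits, class program saves
# O = 20 bits, M = 10^10 observers in our class, N = 10^20 overall.
C, O, M, N = 100, 20, 10**10, 10**20
print(generic_route_bits(C, N))   # 100 + ~66.4 = ~166.4 bits
print(class_route_bits(C, O, M))  # 80 + ~33.2  = ~113.2 bits
# The class route is cheaper, so the dominant contribution to our
# measure is 2^-(C - O + log2 M): it varies with M, not with N.
```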
But the additional observers exactly cancel the extra data needed to identify a specific one, no? The length of the program that identifies the class of people alive in 2013 is the same, however many people are alive in 2013. So the size of N is irrelevant and we expect to find ourselves in classes for which C is small.
That would be true in an SIA approach (the probability of a hypothesis is scaled upwards in proportion to the number of observers). It’s not true in an SSA approach (there is no upscaling to counter the additional complexity penalty of locating an observer out of N observers). This is why SSA tends to favour small N (or small M for a specific reference class).
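A sketch of why the cancellation happens under SIA but not SSA (the priors and counts below are invented): SIA scales the prior by N, which exactly offsets the 2^-log N = 1/N localisation penalty, while SSA leaves the penalty unopposed.

```python
import math

def ssa_weight(prior, C, N):
    # SSA-style weight for "I am this observer": the hypothesis prior
    # times the measure penalty 2^-(C + log2 N), with no upscaling.
    return prior * 2.0 ** -(C + math.log2(N))

def sia_weight(prior, C, N):
    # SIA-style weight: the prior is first scaled up in proportion to
    # N, which cancels the 1/N part of the penalty, leaving 2^-C.
    return prior * N * 2.0 ** -(C + math.log2(N))

# Two hypotheses with equal priors, identical C, differing only in N.
C = 100
for N in (10**11, 10**14):
    print(N, ssa_weight(0.5, C, N), sia_weight(0.5, C, N))
# SSA weights differ by a factor of 1000, favouring the small-N
# hypothesis; SIA weights come out (up to rounding) identical.
```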