That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian-style. Conditional probabilities matter only if they differ across models.
For most pairs of possible models of physics, say X and Y, P(Finn|X) = P(Finn|Y). Thus that particular piece of info is not very useful for distinguishing models of physics.
OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.
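The point can be made concrete with a two-hypothesis Bayes update. The sketch below is purely illustrative (the function name and all the numbers are made up): when the likelihoods of the evidence are equal under both models, the posterior equals the prior, so the evidence is uninformative; when one model makes the evidence far more likely, the posterior shifts sharply toward it.

```python
def posterior(prior_x, p_e_given_x, p_e_given_y):
    """Posterior P(X|E) for two exhaustive hypotheses X and Y,
    with priors prior_x and 1 - prior_x, via Bayes' theorem."""
    num = p_e_given_x * prior_x
    return num / (num + p_e_given_y * (1 - prior_x))

# "Being Finn": equally (un)likely under both models -> no update.
print(posterior(0.5, 1e-10, 1e-10))   # 0.5, same as the prior

# "Living in the 21st century": suppose X makes it 100x more likely
# than Y does -> strong update toward X.
print(posterior(0.5, 0.01, 0.0001))   # ~0.99
```

Only the likelihood *ratio* matters here, which is exactly why evidence with equal conditional probabilities under every model drops out of the comparison.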
As for the reference class, “people asking these kinds of questions” is probably the best choice. Thus I wouldn’t put any stock in the idea that animals aren’t conscious.