I wrote the following comment over there, which seems to have been caught in a spam filter or something:
Both Sue and the third party are rational, and their knowing all objective facts about everyone’s experiences does not eliminate the disagreement.
The reason is that the evidence is anthropic in nature: it is more likely under certain hypotheses precisely because those hypotheses affect the probability of “you” existing or “you” having certain experiences, above and beyond objective facts. Such evidence is agent-centered.
For example, Sue’s evidence raises the probability of the hypothesis “God exists and cares about me in particular” for her, but not for the third party. Of course, the third party’s probability of “God cares about Sue in particular” goes up. But that has a lower prior probability when it’s about someone else, because that hypothesis also predicts that “I” will “be Sue” more than the baseline expectation of 1 in 7 billion or so.
In general, the class of hypotheses that Sue’s evidence favors also tends to make Sue’s existence and sentience more likely. Since Sue knows that she exists and is sentient but does not know that about anyone else, she starts with a higher prior probability in that class of hypotheses, and therefore the same update as third parties will result in a higher posterior probability for that class.
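The argument above can be sketched numerically. All numbers below are illustrative assumptions, not from the original comment: Sue and a third party apply the same likelihood ratio to the hypothesis “God cares about Sue in particular,” but Sue starts from a higher prior because that hypothesis also predicts her own existence and experiences.

```python
# Toy illustration (all numbers made up): the same evidence, i.e. the
# same likelihood ratio, applied to different priors yields different
# posteriors.

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

LR = 1000.0               # how strongly the experiences favor the hypothesis
sue_prior = 1e-4          # Sue's prior in "God cares about *me*": boosted,
                          # because the hypothesis also predicts her existing
third_party_prior = 1e-7  # third party's prior in "God cares about Sue"

print(posterior(sue_prior, LR))          # ≈ 0.091
print(posterior(third_party_prior, LR))  # ≈ 0.0001
```

Same update, very different posteriors: Sue ends up taking the hypothesis somewhat seriously while the third party still all but dismisses it, with no irrationality on either side.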
This also means that Sue’s family and barista rationally conclude somewhat weaker versions of the class of hypotheses: their priors should be higher than a random third party’s, but lower than Sue’s.
This is analogous to the argument with lottery winners in “Real Life Anthropic Weirdness” (https://www.lesswrong.com/posts/kKAmxmQq9umJiMFSp/real-life-anthropic-weirdness), quoting from that post:

“But what is your watching friend supposed to think? Though his predicament is perfectly predictable to you—that is, you expected before starting the experiment to see his confusion—from his perspective it is just a pure 100% unexplained miracle. What you have reason to believe and what he has reason to believe would now seem separated by an uncrossable gap, which no amount of explanation can bridge. This is the main plausible exception I know to Aumann’s Agreement Theorem.

Pity those poor folk who actually win the lottery! If the hypothesis ‘this world is a holodeck’ is normatively assigned a calibrated confidence well above 10⁻⁸, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)”
Your example with Sue is the same pattern at a smaller scale: less evidence, and therefore weaker conclusions.
Re your ethics example, you’re assuming that knowledge of others’ intuitions counts as moral evidence. Even if that were the case, knowledge of a single person’s intuition is plausibly not enough to shift from uncertain to confident in one position, or vice versa.
Hmm… I suppose the psychic abilities hypothesis might be indirectly tied to anthropic considerations, but that feels to me like reading too much into the particular example chosen. Or maybe not; maybe almost any hypothesis has anthropic implications when it comes down to it, but I think the idea was to isolate one particular effect.
I think that the only rational reason to treat your own experience as more significant, given the constraints of the problem, is the anthropic nature of the evidence. And that explains nicely why others shouldn’t update as much.
If you think there’s an example that isolates non-anthropic effects that behave similarly, perhaps. I’ll reserve judgement until I see such an example—for now, I don’t know of any.