I think I have an answer, but it requires repetition and the ability to gather information to be useful, and the problem above rules that out. I do see a way around that, though it has a possible flaw of its own. I’ll post my current thoughts.
Essentially, have Alice track how accurate she has been in situations where she thinks her priors are right, another person thinks their priors are right, and no agreement can be reached easily.
If, for instance, Alice finds that in the last 10 arguments that appeared to stem from differing priors, she later turned out to be incorrect only once, then using that track record she should probably go with her own set of priors.
Alternatively, if Alice finds that in the last 10 such arguments she turned out to be correct only once, then using that information she should probably go with her interpretation of Bob’s set of priors.
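To make the bookkeeping concrete, here is a minimal sketch of what I have in mind. The names (`DisagreementLog`, `keep_own_priors`) and the 0.5 cutoff are my own illustrative choices, not anything established:

```python
# Minimal sketch of Alice's bookkeeping: record each prior-level disagreement,
# note (once she finds out) whether her side turned out to be right, and use
# the resulting hit rate to decide whose priors to lean on next time.
# All names and the 0.5 threshold are illustrative assumptions.

class DisagreementLog:
    def __init__(self):
        self.outcomes = []  # True = Alice turned out to be right

    def record(self, alice_was_right: bool) -> None:
        self.outcomes.append(alice_was_right)

    def hit_rate(self) -> float:
        """Fraction of resolved disagreements in which Alice was right."""
        if not self.outcomes:
            return 0.5  # no track record yet: treat both sides as equally credible
        return sum(self.outcomes) / len(self.outcomes)

    def keep_own_priors(self, threshold: float = 0.5) -> bool:
        """Lean on her own priors only if her track record beats the threshold."""
        return self.hit_rate() > threshold


# Example: right in 9 of the last 10 disagreements -> stick with her own priors.
log = DisagreementLog()
for was_right in [True] * 9 + [False]:
    log.record(was_right)
print(log.hit_rate())          # 0.9
print(log.keep_own_priors())   # True
```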
The problem is that in actual arguments there doesn’t seem to be a good way of accounting for this. I can think of a few cases where I did change my mind on priors after re-reviewing them later, but I can’t remember as clearly ever changing anyone else’s (though I have to bear in mind that I don’t have access to their brains).
Furthermore, there is the bizarre case of “I think I’m usually wrong. But sometimes, other people seem to trust my judgement and believe I’m correct more often than not.” I have no idea how to process that thought using the above logic.
If Alice thinks that the other times she disagreed with people are representative of her current disagreement, I don’t see why she shouldn’t update according to the normal Bayesian rules.
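For what it’s worth, here is a rough sketch of what that update might look like if the past disagreements really are representative: treat “Alice is the one who’s right in this kind of disagreement” like a biased coin and update on her track record. The uniform Beta(1, 1) prior is my own assumption, purely for illustration:

```python
# Rough sketch of the "normal Bayesian rules" applied to the track record:
# model Alice's being right in such disagreements as a Bernoulli variable with
# a uniform Beta(1, 1) prior (an assumption for illustration only), and update
# on k wins out of n past disagreements. The posterior mean is Laplace's rule
# of succession: (k + 1) / (n + 2).

def prob_alice_right(times_right: int, total_disagreements: int) -> float:
    """Posterior probability Alice is right this time, given k of n past wins."""
    return (times_right + 1) / (total_disagreements + 2)

print(prob_alice_right(9, 10))  # ~0.83: lean heavily on her own priors
print(prob_alice_right(1, 10))  # ~0.17: lean heavily toward Bob's priors
```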