Correct me if I’m wrong here, but you don’t seem to have any good reason for assuming P(A)=1/3.
It only works if you assume that the probability of a view being correct equals the proportion of experts who support it (perhaps you believe that one expert is omniscient and the others are just making uneducated guesses). If you’re going to assume that, you might as well shorten the argument by just pointing out that P(G) = 2/3, since 2/3 of the experts agree with G.
If we instead start from a prior more like that of the OP, one which says:
P(argument X is correct | the majority of experts agree with X) = 0.9
P(argument X is incorrect | the majority of experts disagree with X) = 0.9
This makes our final estimate of P(G) roughly equal to our prior estimate of P(G | ~A & ~B), which is the OP’s point.
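To make this concrete, here is a rough numeric sketch. The 0.1 values for P(A) and P(B) follow from the 0.9 prior above (each argument is rejected by a 2/3 majority); the conditional probabilities of G are illustrative assumptions, not values from the discussion:

```python
# Under the OP-style prior, each of A and B is rejected by a 2/3 majority
# of experts, so P(A) ~= 0.1 and P(B) ~= 0.1.
p_A, p_B = 0.1, 0.1
p_neither = 1 - p_A - p_B          # P(~A & ~B) ~= 0.8, using P(A & B) ~= 0

# Illustrative conditionals (assumed): G is very likely if either argument
# holds, much less likely if both fail.
p_G_given_A = 0.9                  # P(G | A & ~B)
p_G_given_B = 0.9                  # P(G | ~A & B)
p_G_given_neither = 0.3            # P(G | ~A & ~B)

# Law of total probability over the three mutually exclusive cases.
p_G = p_A * p_G_given_A + p_B * p_G_given_B + p_neither * p_G_given_neither
print(round(p_G, 2))               # -> 0.42
```

The weighted sum is dominated by the ~A & ~B term, so the final estimate is pulled most of the way toward P(G | ~A & ~B) = 0.3, which is the OP’s point.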
Or, to put it another way, one which should work with most reasonable priors:
Define C to be the background information that 2/3 of experts support proposition G: 1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A; the remaining 1/3 reject both A and B.
Since beliefs in A and B anti-correlate strongly among experts, it is reasonable to assume that P(A & B) = 0 (approximately). I will assume this without mentioning it again from now on.
P(G) = P(A)P(G | A & ~B) + P(B)P(G | ~A & B) + P(~A & ~B)P(G | ~A & ~B), since our estimate now must equal our expectation of what our future estimate would be if we discovered for certain whether A and B were correct.
If A and B are arguments for G, that must mean that P(G | A) > P(G | ~A) and P(G | B) > P(G | ~B). From this and the fact that P(A & B) = 0, fairly simple maths proves that P(G | ~A & ~B) < P(G | A & ~B) and P(G | ~A & ~B) < P(G | ~A & B). This means that as P(~A & ~B) increases, P(G) must decrease (holding the conditional probabilities fixed).
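A quick numeric check of that monotonicity claim. The conditional probabilities below are arbitrary illustrative values satisfying the inequalities just derived (P(G | ~A & ~B) smallest):

```python
# Illustrative conditionals (assumed), with P(G | ~A & ~B) the smallest.
p_G_given_A = 0.9        # P(G | A & ~B)
p_G_given_B = 0.8        # P(G | ~A & B)
p_G_given_neither = 0.3  # P(G | ~A & ~B)

def p_G(p_neither):
    # Split the remaining probability mass evenly between A & ~B and ~A & B.
    p_A = p_B = (1 - p_neither) / 2
    return (p_A * p_G_given_A + p_B * p_G_given_B
            + p_neither * p_G_given_neither)

# Shifting mass into ~A & ~B lowers P(G).
values = [round(p_G(w), 3) for w in (0.2, 0.5, 0.8)]
print(values)  # -> [0.74, 0.575, 0.41], strictly decreasing
```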
Assuming we place some trust in experts, we must accept that if the majority of experts disagree with an argument then this is evidence against that argument.
If we find that the majority of experts disagree with A, this must reduce P(A), and it must increase the weighted average of P(B) and P(~A & ~B). The evidence doesn’t distinguish between these other two possibilities (the majority of experts would probably disagree with A whichever of them was true), so both of them should increase.
If we find that the majority of experts disagree with B, then by the same argument this must reduce P(B) and increase P(A) and P(~A & ~B).
If C is true then both of the above things happen, and P(~A & ~B) increases twice, so P(~A & ~B | C) > P(~A & ~B).
This means, for reasons established above, that P(G | C) < P(G). The OP is right: this distribution of expert opinion is evidence against G.
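The whole argument can be checked end to end with a small Bayesian update. Everything numeric here is an illustrative assumption: the 0.9/0.1 likelihoods come from the OP-style prior, the two majority verdicts are treated as conditionally independent given the hypothesis, and the conditionals on G are arbitrary values satisfying the inequalities established above:

```python
# Hypotheses (with P(A & B) ~= 0): A & ~B, ~A & B, ~A & ~B.
priors = {"A&~B": 1/3, "~A&B": 1/3, "~A&~B": 1/3}

# Likelihood of "a majority disagrees with X": 0.1 if X is correct,
# 0.9 if X is incorrect. C = majority disagrees with A AND with B;
# the two verdicts are treated as conditionally independent (assumed).
likelihood_C = {
    "A&~B":  0.1 * 0.9,   # A correct, B incorrect
    "~A&B":  0.9 * 0.1,   # A incorrect, B correct
    "~A&~B": 0.9 * 0.9,   # both incorrect
}

# Bayes' rule over the three hypotheses.
evidence = sum(priors[h] * likelihood_C[h] for h in priors)
posterior = {h: priors[h] * likelihood_C[h] / evidence for h in priors}

# Illustrative conditionals (assumed), with P(G | ~A & ~B) the smallest.
p_G_given = {"A&~B": 0.9, "~A&B": 0.9, "~A&~B": 0.3}

p_G_prior = sum(priors[h] * p_G_given[h] for h in priors)
p_G_post = sum(posterior[h] * p_G_given[h] for h in priors)

print(round(posterior["~A&~B"], 3))             # -> 0.818, up from 1/3
print(round(p_G_prior, 3), round(p_G_post, 3))  # -> 0.7 0.409
```

P(~A & ~B | C) rises from 1/3 to 9/11, and P(G) falls accordingly, matching the qualitative conclusion: C is evidence against G.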