The notion of evidence as rationalists use it is very expansive. B is evidence for A if and only if P(A|B)>P(A). Or equivalently, if and only if P(A|¬B)<P(A). If people never got bribed (¬B), that sure seems to me like it would lower the probability of a deep state (A). Hence P(A|¬B)<P(A) and therefore P(A|B)>P(A), which means people getting bribed is, in fact, evidence for a deep state.
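To spell out why those two conditions are equivalent, here is the standard derivation from the law of total probability, written as a sketch (it assumes 0 < P(B) < 1; nothing beyond that):

```latex
% P(A) is a weighted average of P(A|B) and P(A|~B), so whenever one of
% them lies below P(A), the other must lie above it (given 0 < P(B) < 1).
\[
  P(A) = P(A \mid B)\,P(B) + P(A \mid \neg B)\,\bigl(1 - P(B)\bigr)
\]
\[
  P(A \mid \neg B) < P(A) \iff P(A \mid B) > P(A)
\]
```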
(This is the same with people saying “correlation isn’t evidence of causation”. Incorrect; correlation is always evidence for causation, because lack of correlation is evidence against causation.)
Again, it’s the magnitude of the evidence that you can dispute. If you think people will get bribed regardless, then people getting bribed is only very weak evidence for a deep state. But it’s still evidence. Just like seeing bank robbers is evidence for a large society of bank robbers. (Because seeing no bank robbers would be evidence against such a society.)
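To put rough numbers on "very weak evidence", here is a toy Bayesian update; the probabilities below are illustrative placeholders I've picked, not figures from the discussion:

```python
# Toy update: how much does observing bribes (B) move belief in a
# deep state (A) if bribes are nearly as likely without one?
prior_a = 0.10          # assumed prior probability of a deep state
p_b_given_a = 0.99      # bribes almost certain if there is a deep state
p_b_given_not_a = 0.95  # ...and still very likely if there isn't

p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
posterior_a = p_b_given_a * prior_a / p_b

print(f"P(A) = {prior_a:.3f} -> P(A|B) = {posterior_a:.3f}")
# ~0.104: a real update, but a tiny one
```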
It feels to me that “evidence of X” as colloquially used might be a stronger phrase than “evidence for X”, and almost as strong as “proof of X”. So maybe correlation is evidence for causation, but isn’t evidence of causation :-)
If something would be equally certain in two cases, it can fundamentally constitute no evidence between them at all, even in the Bayesian sense. Suppose the three alternatives are Honest administrators, business as Usual, and Deep state. We observe some Bribes. Then P(B|U) = P(B|D) = 1 and P(B|H) = 0. Since the hypotheses are exhaustive, P(B) = P(U) + P(D), and therefore
P(D|B) = P(D) / (P(D) + P(U))
We haven't really updated our odds of Deep state being the explanation rather than business as Usual at all. We have ruled out the Honest hypothesis, but I didn't have many illusions about that anyway. It's a fairly extreme framing (you could get a tiny update if you posited that P(B|U) is slightly lower than 1 or whatever), but I think for all practical purposes it rounds to this calculation.
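A minimal sketch of that calculation in code, with placeholder priors chosen only for illustration:

```python
# Three exhaustive hypotheses: Honest (H), business as Usual (U), Deep state (D).
# Observation: Bribes (B). With P(B|U) = P(B|D) = 1 and P(B|H) = 0, the
# posterior odds of D vs U equal the prior odds: no update between them.
priors = {"H": 0.20, "U": 0.60, "D": 0.20}    # placeholder priors
likelihoods = {"H": 0.0, "U": 1.0, "D": 1.0}  # P(B | hypothesis)

p_b = sum(likelihoods[h] * priors[h] for h in priors)  # = P(U) + P(D)
posteriors = {h: likelihoods[h] * priors[h] / p_b for h in priors}

print(posteriors)  # H: 0.0, U: 0.75, D: 0.25
print(priors["D"] / priors["U"], posteriors["D"] / posteriors["U"])  # both 1/3
```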
Yes, if you assume that the probability of seeing an observation was 100% under your favorite model, then seeing it doesn't update you away from that model; but that assumption is obviously not true. (And I already conceded that the update is marginal!)
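For instance, plugging a slightly-less-than-certain likelihood into the same calculation shows how marginal that update is (the 0.95 below is an assumed figure, not one from the exchange):

```python
# If bribes are merely very likely (not certain) under business as Usual,
# the Deep-state-vs-Usual odds move only slightly.
p_b_given_u, p_b_given_d = 0.95, 1.0     # assumed likelihoods
odds_shift = p_b_given_d / p_b_given_u   # Bayes factor for D over U
print(f"Odds of Deep state vs Usual multiply by {odds_shift:.3f}")  # ~1.053
```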
I’d say the probability of seeing some resistance or corruption in virtually any administration is damn close to 100%.