Yes, even without the extra condition. Let a = P(A), b = P(B), c = P(A & B).
P(B|A) > P(B) is equivalent to c/a > b, i.e. c > ab.
P(~B|~A) > P(~B) is equivalent to (1-a-b+c)/(1-a) > 1-b, i.e. 1-a-b+c > (1-a)(1-b) = 1 - a - b + ab, which reduces to c > ab, which is the hypothesis.
As a check that the conventional definition of P(B|A)=0 when P(A)=0 doesn’t affect things: if P(A)=0, P(A)=1, P(B)=0, or P(B)=1, then P(B|A) ≤ P(B), making the antecedent false and the proposition trivially true.
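The equivalence above is also easy to spot-check numerically. Here is a minimal Python sketch (function name and trial count are my own choices) that samples random joint distributions over the four cells of the 2x2 table and confirms the two inequalities always agree:

```python
import random

def check_equivalence(trials: int = 100_000) -> bool:
    """Spot-check that P(B|A) > P(B) iff P(~B|~A) > P(~B)."""
    for _ in range(trials):
        # Draw a random joint distribution over the four cells
        # (A&B, A&~B, ~A&B, ~A&~B) by normalizing four uniforms.
        cells = [random.random() for _ in range(4)]
        total = sum(cells)
        p_ab, p_anb, p_nab, p_nanb = (x / total for x in cells)
        a = p_ab + p_anb        # P(A)
        b = p_ab + p_nab        # P(B)
        c = p_ab                # P(A & B)
        # Skip near-ties where floating-point rounding could flip
        # a strict inequality.
        if abs(c - a * b) < 1e-12:
            continue
        forward = c / a > b                   # P(B|A) > P(B)
        backward = p_nanb / (1 - a) > 1 - b   # P(~B|~A) > P(~B)
        if forward != backward:
            return False
    return True
```

Since every sampled cell is strictly positive, this also respects the "assume all probabilities are positive" caveat.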
Does P(B|A) > P(B) imply P(~B|~A) > P(~B)?
ETA: Assume all probabilities are positive.
Yes, assuming 0 and 1 are not probabilities.
Yes, the math works out; it's just a restatement of the claim that the absence of evidence is evidence of absence.
Ironically enough, I’m using this to prove that absence of “that particular proof” is not evidence of absence.
Hey, as long as you do your math correctly … :D