It is not a problem for likelihood ratios because the inclusion of A or of ~A in the condition of each probability in (2) screens off P(A|X)
That your prose is not accompanied by any equations makes it very effortful to filter all its possible meanings to find the intelligent one you no doubt intend, and impractical to respond. I might disagree that
In order to simultaneously have B and C be independent given X and given XA, they need to have a complicated dependence given X~A, this dependence itself depending on P(A|X).
but I can’t be sure, and I have nothing precise to refer back to. (You don’t have to oblige me, of course.)
That your prose is not accompanied by any equations makes it very effortful to filter all its possible meanings to find the intelligent one you no doubt intend, and impractical to respond. I might disagree that

In order to simultaneously have B and C be independent given X and given XA, they need to have a complicated dependence given X~A, this dependence itself depending on P(A|X).

but I can’t be sure, and I have nothing precise to refer back to. (You don’t have to oblige me, of course.)
First, that is not a response to your quote from me. In that quote I was in fact pointing to elements of an equation.
By “before” independence, we have

(1) P(B|C) = P(B)

Decompose B into AB or ~AB:

(2) P(B|C) = P(AB|C) + P(~AB|C)
(3) P(B|C) = P(B|AC)P(A|C) + P(B|~AC)P(~A|C)

Apply “after” independence, P(B|AC) = P(B|A), to (3):

(4) P(B|C) = P(B|A)P(A|C) + P(B|~AC)P(~A|C)

Combining (1) and (4):

(5) P(B) = P(B|A)P(A|C) + P(B|~AC)P(~A|C)

Solve for P(B|~AC):

(6) P(B|~AC) = (P(B) − P(B|A)P(A|C)) / P(~A|C)

Use Bayes’ Law to substitute out the conditional probabilities of A, via P(A|C) = P(C|A)P(A)/P(C) and P(~A|C) = P(C|~A)P(~A)/P(C):

(7) P(B|~AC) = (P(B)P(C) − P(B|A)P(C|A)P(A)) / (P(C|~A)P(~A))
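(A quick numerical check of the reconstructed derivation. This is only a sketch: the values of P(A), P(B|A), P(C|A), P(B), P(C) below are freely assumed, not taken from the thread. It builds the joint quantities forced by the “before” and “after” independence assumptions, confirms the directly computed P(B|~AC) matches (7), and confirms that it moves when P(A) alone changes.)

```python
from fractions import Fraction as F

def p_B_given_nAC(a, p, q, b, c):
    # P(B|~AC) computed directly from the joint distribution forced by
    # "before" independence, P(BC) = P(B)P(C), and
    # "after" independence, P(BC|A) = P(B|A)P(C|A):
    #   P(BC~A) = P(B)P(C) - P(A)P(B|A)P(C|A),  P(C~A) = P(C) - P(A)P(C|A)
    return (b * c - a * p * q) / (c - a * q)

def eq7(a, p, q, b, c):
    # Right-hand side of (7), with P(C|~A) obtained via Bayes' Law.
    pC_nA = (c - a * q) / (1 - a)
    return (b * c - p * q * a) / (pC_nA * (1 - a))

# Freely assumed illustrative values (not from the thread):
p, q = F(3, 5), F(7, 10)    # P(B|A), P(C|A)
b, c = F(1, 2), F(11, 20)   # P(B), P(C)

assert p_B_given_nAC(F(1, 4), p, q, b, c) == eq7(F(1, 4), p, q, b, c)
# The required dependence given ~A really does move with P(A) alone:
assert p_B_given_nAC(F(1, 4), p, q, b, c) != p_B_given_nAC(F(1, 8), p, q, b, c)
```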
I believe that (7) qualifies as a complicated dependence given ~A, which itself depends on P(A). This is just the result of my exploration of the issue, but you asked for it.
Also, I would like to reiterate:

I would have expected this to be clear if you had applied your questions to my example, and tried to construct a corresponding example in which Pev’s can be multiplied.
Here’s a simple example where pev(A,BC)=pev(A,B)pev(A,C):
[EG1] P(A)=1/4, P(B)=P(C)=1/2, P(BC)=1/4, P(AC)=P(BC)=1/16, P(ABC)=1/64.
Then just suppose there are some medical diagnostics with these probabilities, etc. But see below for a comparison of this with your coin scenario.
More generally, let a=P(A)=1/4, b=P(B), c=P(C), x=P(BC), y=P(AC), z=P(AB), t=P(ABC).
The “before and after independence” assumption [BAI] is that x=bc and at=yz.
The “independent tests” assumption [IT] is that (x-t)=(b-z)(c-y) and at=yz.
(2) How are the “independent tests” assumptions “stable”?

As I stated earlier, the condition A or the condition ~A screens off P(A|X).
Neither [BAI] nor [IT] is stable with respect to the variable a=P(A), and I see no equation involving anything I’d call “screening off”, though there might be one somewhere.
In any case, that does not interest me, because your equation (7) has my attention:
A qualitative distinction between my would-be medical scenario [EG1] and your coin scenario is that medical diagnostics, and particularly their dependence/independence, are not always “causally justified”, but two coin flips can be seen as independent because they visibly just don’t interact.
I bet your (7) gives a good description of this somehow, or something closely related… But I still have to formalize my thoughts about this.
Let me think about it awhile, and I’ll post again if I understand or wonder something more precise.
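(The instability claim is easy to check numerically. A sketch, assuming the corrected reading of [EG1] with P(AB)=P(AC)=1/16: hold the tests’ conditional characteristics fixed and vary the base rate a=P(A); the x=bc half of [BAI] then holds only at a=1/4, while the at=yz half, being purely conditional on A, survives.)

```python
from fractions import Fraction as F

# Conditional test characteristics implied by [EG1]
# (reading P(AB) = P(AC) = 1/16, i.e. the corrected version):
pB_A,  pC_A,  pBC_A  = F(1, 4),  F(1, 4),  F(1, 16)   # given A
pB_nA, pC_nA, pBC_nA = F(7, 12), F(7, 12), F(5, 16)   # given ~A

def bai_holds(a):
    # Recompute b, c, x, y, z, t for base rate P(A) = a with the tests'
    # conditional characteristics held fixed, then test [BAI]: x=bc and at=yz.
    b = a * pB_A + (1 - a) * pB_nA
    c = a * pC_A + (1 - a) * pC_nA
    x = a * pBC_A + (1 - a) * pBC_nA   # P(BC)
    y, z, t = a * pC_A, a * pB_A, a * pBC_A
    return x == b * c and a * t == y * z

assert bai_holds(F(1, 4))          # holds at the original base rate...
assert not bai_holds(F(1, 2))      # ...but fails as soon as a moves
assert not bai_holds(F(1, 8))
```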
I assume you meant:
[EG1] P(A)=1/4, P(B)=P(C)=1/2, P(BC)=1/4, P(AC)=P(AB)=1/16, P(ABC)=1/64.
Then just suppose there are some medical diagnostics with these probabilities, etc.

You just glossed over the whole point of this exercise. The problem is that values such as P(ABC) are combined facts about the population and both tests. Try defining the scenario using only facts about the population in isolation (P(A)), about the tests in isolation (P(B|A), P(C|A), P(B|~A), P(C|~A)), and the dependence between the tests (P(B|AC), P(B|A~C), P(B|~AC), P(B|~A~C), [EDIT: removed redundant terms, I got carried away when typing out the permutations]). The point is to demonstrate how you have to contrive certain properties of the dependence between the tests, the conditional probabilities given ~A, to make summary properties you care about work out for a specific separate fact about the population, P(A).
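(For concreteness, here is a sketch of that exercise with the corrected [EG1] numbers, assuming pev(A,E) denotes P(A|E)/P(A), which is consistent with the multiplicativity being discussed. The joint is rebuilt only from the population fact P(A), the tests in isolation, and the between-test dependence; the multiplication works out because P(B|~AC) is tuned to the specific value 15/28 that equation (7) dictates for P(A)=1/4.)

```python
from fractions import Fraction as F

# Facts about the population and each test in isolation ([EG1], corrected):
a     = F(1, 4)             # P(A)
pB_A  = pC_A  = F(1, 4)     # P(B|A),  P(C|A)
pB_nA = pC_nA = F(7, 12)    # P(B|~A), P(C|~A)
# The between-test dependence -- the part that has to be contrived:
pB_AC  = F(1, 4)            # P(B|AC)
pB_nAC = F(15, 28)          # P(B|~AC), tuned to this particular P(A)

# Rebuild [EG1]'s summary values from these pieces alone:
t = a * pC_A * pB_AC                 # P(ABC)
x = t + (1 - a) * pC_nA * pB_nAC     # P(BC)
y, z = a * pC_A, a * pB_A            # P(AC), P(AB)
b = z + (1 - a) * pB_nA              # P(B)
c = y + (1 - a) * pC_nA              # P(C)

def pev(p_AE, p_E):
    # pev(A, E) = P(A|E) / P(A), computed from P(AE) and P(E).
    return (p_AE / p_E) / a

assert (b, c, x, y, z, t) == (F(1, 2), F(1, 2), F(1, 4), F(1, 16), F(1, 16), F(1, 64))
assert pev(t, x) == pev(z, b) * pev(y, c)   # pev(A,BC) = pev(A,B) * pev(A,C)
```

Rerunning with a different value of a while holding every other input fixed breaks the final equality, which illustrates the complaint: the ~A-side dependence is contrived around one particular P(A).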