If two variables, I and O, both make one value of E more likely than the other, then the probability of I conditional on some value of E is different from the unconditional probability of I, because I explains some of that value of E.
That’s correct
but if you also know O, then this explains some of that value of E as well, and so P(I|E=x, O) should be different from P(I|E=x).
This is generically true but doesn’t have to be true for every value of x.
Here’s one way to see why the graph in the post is right: look at all other causal graphs, and you will see that they either fail to imply that I and O are independent (as our graph does), or imply independences or conditional independences that don’t exist in the data.
I don’t really get how this can be true for some values of x but not others if the variable is binary.
The post gives one example of how it can be true: the probabilities are compatible with the causal graph, I is independent of O given E = no, but I is not independent of O given E = yes.
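A quick numeric sketch of how that can happen (the numbers here are made up for illustration, not taken from the post): in a collider I → E ← O, independence of I and O given E = e holds exactly when P(E=e | I, O) factorizes into a function of I times a function of O. So we can pick P(E=no | I, O) to factorize while P(E=yes | I, O) = 1 − P(E=no | I, O) does not.

```python
import itertools

# Hypothetical numbers: collider I -> E <- O, with I and O
# unconditionally independent fair coins.
p_i = {0: 0.5, 1: 0.5}  # P(I)
p_o = {0: 0.5, 1: 0.5}  # P(O)

# P(E=no | I=i, O=o) = g[i] * h[o] factorizes by construction,
# so I and O stay independent given E = no.  Its complement
# 1 - g[i]*h[o] does NOT factorize, so they become dependent
# given E = yes ("explaining away").
g = {0: 0.8, 1: 0.4}
h = {0: 0.8, 1: 0.4}

def p_e_given(i, o, e):
    return g[i] * h[o] if e == "no" else 1 - g[i] * h[o]

# Full joint distribution over (I, O, E).
joint = {(i, o, e): p_i[i] * p_o[o] * p_e_given(i, o, e)
         for i in (0, 1) for o in (0, 1) for e in ("no", "yes")}

def indep_given_e(e):
    """Check I independent of O conditional on E = e in the joint table."""
    pe = sum(v for (i, o, ee), v in joint.items() if ee == e)
    ok = True
    for i, o in itertools.product((0, 1), (0, 1)):
        p_io = joint[(i, o, e)] / pe
        pi = sum(joint[(i, oo, e)] for oo in (0, 1)) / pe
        po = sum(joint[(ii, o, e)] for ii in (0, 1)) / pe
        ok &= abs(p_io - pi * po) < 1e-12
    return ok

print(indep_given_e("no"))   # True:  independence holds at E = no
print(indep_given_e("yes"))  # False: dependence appears at E = yes
```

So "binary" is no obstacle: whether conditioning on E=x breaks independence depends on whether P(E=x | I, O) factorizes, and one of the two values of a binary E can factorize while the other doesn't.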
Have you tried this exercise?