The non-uniqueness doesn’t make causality absent or irrelevant; it just means there are multiple minimal representations that use causality. The causality arises in how your node connections are asymmetric. If the relativity of simultaneity (observers seeing the same events in a different time order) doesn’t obviate causality, neither should the existence of multiple causal networks.
There are indeed equivalent models that use purely symmetric node connections (or none at all, in the case of the superexponential pairwise conditional independence table across all variables), but (correct me if I’m wrong) by throwing away the information graphically represented by the arrows, you no longer have a maximally efficient encoding of the joint probability distribution (even though it’s certainly not as bad as the superexponential table).
I guess there are two points here.

First, authors like Pearl do not use “causality” to mean just that there is a directed edge in a Bayesian network (i.e. that certain conditional independence properties hold). Rather, Pearl uses it to mean that the model describes what happens under interventions. One can see the difference by comparing Rain → WetGrass with WetGrass → Rain (which are equivalent as Bayesian networks). Of course, maybe he is confused and the difference will dissolve under more careful consideration, but I think this shows one should be careful in claiming that Bayes networks encode our best understanding of causality.
Second, do we need Bayesian networks to economically represent distributions? This is slightly subtle.
We do not need the directed arrows when representing a particular distribution. For example, suppose a distribution P(A,B,C) is represented by the Bayesian network A → B ← C. Expanding the definition, this means that the joint distribution can be factored as
P(A=a,B=b,C=c) = P1(A=a) P2(B=b|A=a,C=c) P3(C=c)
where P1 and P3 are the marginal distributions of A and C, and P2 is the conditional distribution of B. So the data we needed to specify P were two one-column tables specifying P1 and P3, and a three-column table specifying P2(b|a,c) for all values of a,b,c. But now note that we do not gain very much by knowing that these are probability distributions. To save space it is enough to note that P factors as
P(A=a,B=b,C=c) = F1(a) F2(b,a,c) F3(c)
for some real-valued functions F1, F2, and F3. In other words, that P is represented by a Markov network A—B—C. The directions on the edges were not essential.
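To make that concrete, here is a minimal sketch (Python, binary variables, made-up numbers; the names F1, F2, F3 just mirror the formula above). Storing the factor tables is enough to reconstruct the joint, and nothing in the stored data records which way any arrow pointed.

```python
import itertools

# Made-up factor tables for binary A, B, C. F2 happens to come from the
# conditional distribution P(B=b | A=a, C=c), but the representation below
# neither knows nor cares about that.
F1 = {0: 0.3, 1: 0.7}                                   # F1(a)
F3 = {0: 0.6, 1: 0.4}                                   # F3(c)
F2 = {(b, a, c): p
      for a, c in itertools.product((0, 1), repeat=2)
      for b, p in ((0, 0.9), (1, 0.1))}                 # F2(b, a, c)

def joint(a, b, c):
    # The whole distribution is recovered by multiplying the stored factors.
    return F1[a] * F2[(b, a, c)] * F3[c]

# Sanity check: because F2 was built from a conditional distribution,
# the product is a genuine probability distribution (entries sum to 1).
total = sum(joint(a, b, c) for a, b, c in itertools.product((0, 1), repeat=3))
assert abs(total - 1) < 1e-9
```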
And indeed, typical algorithms for inference given a probability distribution, such as belief propagation, do not make use of the Bayesian structure. They work equally well for directed and undirected graphs.
Rather, the point of Bayesian versus Markov networks is that the classes of probability distributions that can be represented by them are different. So they are useful when we try to learn a probability distribution, and want to cut down the search space by constraining the distribution with some independence relations that we know a priori.
Bayesian networks are popular because they let us write down many independence assumptions that we know hold for practical problems. However, we then have to ask how we know those particular independence relations hold. And that’s because they correspond to causal relations! The reason Bayesian networks are popular with human researchers is that they correspond well with the notion of causality that humans use. We don’t know that the Armchairians would also find them useful.
To save space it is enough to note that P factors as
P(A=a,B=b,C=c) = F1(a) F2(b,a,c) F3(c)
for some real-valued functions F1, F2, and F3. In other words, that P is represented by a Markov network A—B—C. The directions on the edges were not essential.
Can’t the directions be recovered automatically from that expression, though? That is, discarding the directions from the notation of conditional probabilities doesn’t actually discard them.
The reconstruction algorithm would label every function argument as “primary” or “secondary”, begin with no arguments labelled, and repeatedly do this:
For every function with no primary variable and exactly one unlabelled variable, label that variable as primary and all of its occurrences as arguments to other functions as secondary.
When all arguments are labelled, make a graph of the variables with an arrow from X to Y whenever X and Y occur as arguments to the same function, X as secondary and Y as primary. If the functions F1, F2, etc. originally came from a Bayesian network, won’t this recover that precise network?
If the original graph was A ← B → C, the expression would have been F1(a,b) F2(b) F3(c,b).
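Here is a rough Python sketch of that labelling procedure (my own illustration; each factor is represented just by the tuple of variable names it takes, since the numeric tables aren’t needed). Run on the factor list for A ← B → C it does recover B → A and B → C:

```python
def recover_directions(factors):
    primary = {}        # factor index -> the argument labelled "primary"
    claimed = set()     # variables already labelled primary in some factor
    changed = True
    while changed:
        changed = False
        for i, args in enumerate(factors):
            if i in primary:
                continue
            unlabelled = [v for v in args if v not in claimed]
            if len(unlabelled) == 1:
                # Label it primary here; its occurrences in other factors
                # thereby count as secondary.
                primary[i] = unlabelled[0]
                claimed.add(unlabelled[0])
                changed = True
    # Arrow X -> Y whenever X (secondary) and Y (primary) share a factor.
    edges = set()
    for i, args in enumerate(factors):
        if i in primary:
            for x in args:
                if x != primary[i]:
                    edges.add((x, primary[i]))
    return edges

# F1(a,b) F2(b) F3(c,b), i.e. the factorization coming from A <- B -> C:
print(recover_directions([("a", "b"), ("b",), ("c", "b")]))
# {('b', 'a'), ('b', 'c')}, i.e. B -> A and B -> C
```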
I think this is right: if you know that the factors were learned by fitting them to a Bayesian network, you can recover what that network must have been. And you can go even further: if you only have a joint distribution, you can use the techniques of the original article to see which Bayesian networks could be consistent with it.
But there is a separate question about why we are interested in Bayesian networks in the first place. SilasBarta seemed to claim that you are naturally led to them if you are interested in representing probability distributions efficiently. But for that purpose (I claim), you only need the idea of factors, not the directed graph structure. E.g. a probability distribution which fits the (equivalent) Bayesian networks A → B → C or A ← B ← C or A ← B → C can be efficiently represented as F1(a,b) F2(b,c). You would not think of representing it as F1(a) F2(a,b) F3(b,c) unless you were already interested in causality.
In other words, that P is represented by a Markov network A—B—C. The directions on the edges were not essential.
On the contrary, they are important and store information about the relationships that saves you time and space. Like I said in my linked comment, the direction of the arrows between A, C, and B tells you whether conditioning on B (perhaps by separating it out into buckets of various values) creates or destroys mutual information between A and C. That saves you from having to explicitly write out all the combinations of conditional (in)dependence.
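To put numbers on that (a toy Python sketch with made-up binary tables, not anything from the original article): in the collider A → B ← C, conditioning on B creates mutual information between A and C, while in the common-cause version A ← B → C it destroys it.

```python
import itertools
from math import log

def mi(pairs):
    # Mutual information I(X; Y) from an unnormalized table over (x, y) pairs.
    z = sum(pairs.values())
    p = {k: v / z for k, v in pairs.items()}
    px = {x: sum(v for (a, _), v in p.items() if a == x) for x, _ in p}
    py = {y: sum(v for (_, b), v in p.items() if b == y) for _, y in p}
    return sum(v * log(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)

def analyse(joint):
    # Compare I(A; C) with I(A; C | B = 0) for a joint table over (a, b, c).
    marg = {(a, c): sum(joint[(a, b, c)] for b in (0, 1))
            for a, c in itertools.product((0, 1), repeat=2)}
    cond = {(a, c): joint[(a, 0, c)]
            for a, c in itertools.product((0, 1), repeat=2)}
    return mi(marg), mi(cond)

pB = lambda a, c: 0.9 if a == c else 0.2            # P(B=0 | A=a, C=c) for the collider
collider = {(a, b, c): 0.25 * (pB(a, c) if b == 0 else 1 - pB(a, c))
            for a, b, c in itertools.product((0, 1), repeat=3)}
fork = {(a, b, c): 0.5 * (0.9 if a == b else 0.1) * (0.9 if c == b else 0.1)
        for a, b, c in itertools.product((0, 1), repeat=3)}

print(analyse(collider))  # (~0, >0): conditioning on B creates the dependence
print(analyse(fork))      # (>0, ~0): conditioning on B screens it off
```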
In other words, that P is represented by a Markov network A—B—C. The directions on the edges were not essential.
Oops, on second thought the factorization is equivalent to the complete triangle, not a line. But this doesn’t change the point that the space requirements are determined by the factors, not the graph structure, so the two representations will use the same amount of space.
On the contrary, they are important and store information about the relationships that saves you time and space.
All independence relations are implicit in the distribution itself, so the graph can only save you time, not space.
It is true that knowing a minimal Bayes network or a minimal Markov network for a distribution lets you read off certain independence assumptions quickly. But it doesn’t save you from having to write out all the combinations. There are exponentially many possible conditional independences, each of which may hold or not, so no sub-exponential representation can encode all of them. And indeed, there are some kinds of independence assumptions that can be expressed as Bayesian networks but not Markov networks, and vice versa. Even in everyday machine learning, it is not the case that Bayesian networks are always the best representation.
You also do not motivate why someone would be interested in a big list of conditional independencies for its own sake. Surely, what we ultimately want to know is e.g. the probability that it will rain tomorrow, not whether or not rain is correlated with sprinklers.
But it doesn’t save you from having to write out all the combinations.
It saves you from having to write them until needed, at which point they can be extracted by walking through the graph rather than doing a lookup on a superexponential table.
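For example, here is a rough Python sketch (my own, using the standard moralization construction rather than anything from this thread) of what such a graph walk looks like: given the arrows, one conditional independence query is answered in time polynomial in the size of the graph.

```python
def d_separated(edges, x, y, given):
    # edges: (parent, child) pairs of a Bayesian network.
    parents = {}
    for u, v in edges:
        parents.setdefault(v, set()).add(u)
        parents.setdefault(u, set())
    # 1. Keep only x, y, the conditioning set, and their ancestors.
    keep, stack = set(), [x, y, *given]
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(parents.get(n, ()))
    # 2. Moralize: marry co-parents, then drop the directions.
    nbrs = {n: set() for n in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            nbrs[v].add(p)
            nbrs[p].add(v)
        for p in ps:
            for q in ps:
                if p != q:
                    nbrs[p].add(q)
    # 3. Remove the conditioning set; x and y are d-separated given `given`
    #    iff they now lie in different connected components.
    blocked = set(given)
    seen, frontier = {x}, [x]
    while frontier:
        n = frontier.pop()
        if n == y:
            return False
        for m in nbrs[n]:
            if m not in seen and m not in blocked:
                seen.add(m)
                frontier.append(m)
    return True

edges = [("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]
print(d_separated(edges, "Rain", "Sprinkler", []))             # True
print(d_separated(edges, "Rain", "Sprinkler", ["WetGrass"]))   # False
```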
You also do not motivate why someone would be interested in a big list of conditional independencies for its own sake. Surely, what we ultimately want to know is e.g. the probability that it will rain tomorrow, not whether or not rain is correlated with sprinklers.
Yes, the question was what they would care about if they were only interested in predictions. And so I think I’ve motivated why they would care about conditional (in)dependencies: it determines the (minimal) set of variables they need to look at! Whatever minimal method of representing their knowledge will then have these arrows (from one of the networks that fits the data).
If you require that causality definitions be restricted to (uncorrelated) counterfactual operations (like Pearl’s “do” operation), then sure, the Armchairians won’t do that specific computation. But if you use the definition of causality from this article, then I think it’s clear that efficiency considerations will lead them to use something isomorphic to it.
It saves you from having to write them until needed
I was saying that not every independence property is representable as a Bayesian network.
Whatever minimal method of representing their knowledge will then have these arrows (from one of the networks that fits the data).
No! Once you have learned a distribution using Bayesian network-based methods, the minimal representation of it is the set of factors. You don’t need the direction of the arrows any more.
I was saying that not every independence property is representable as a Bayesian network.
You mean when all variables are independent, or some other class of cases?
No! Once you have learned a distribution using Bayesian network-based methods, the minimal representation of it is the set of factors. You don’t need the direction of the arrows any more.
Read the rest: you need the arrows if you want to efficiently look up the conditional (in)dependencies.
You mean when all variables are independent, or some other class of cases?
Well, there are doubly-exponentially many possibilities…
The usual example for Markov networks is four variables connected in a square. The corresponding independence assumption is that any two opposite corners are independent given the other two corners. There is no Bayesian network encoding exactly that.
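To spell out the first half of that claim, here is a small Python check (with arbitrary made-up factor tables; the independence follows from the factorization itself, so any tables will do) that the square really does make opposite corners independent given the other two:

```python
import itertools
import random

random.seed(0)
# Pairwise factors around the square A - B - C - D - A, with arbitrary values.
F = [{(x, y): random.random() for x, y in itertools.product((0, 1), repeat=2)}
     for _ in range(4)]

def p(a, b, c, d):
    # Unnormalized P(a,b,c,d) = F_AB(a,b) F_BC(b,c) F_CD(c,d) F_DA(d,a).
    return F[0][(a, b)] * F[1][(b, c)] * F[2][(c, d)] * F[3][(d, a)]

for b, d in itertools.product((0, 1), repeat=2):
    # For binary A and C, "A independent of C given B=b, D=d" is equivalent
    # to this cross-ratio equalling 1.
    ratio = (p(0, b, 0, d) * p(1, b, 1, d)) / (p(0, b, 1, d) * p(1, b, 0, d))
    assert abs(ratio - 1) < 1e-9
```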
you need the arrows if you want to efficiently look up the conditional (in)dependencies.
But again, why would you want that? As I said in the grand^(n)parent, you don’t need to when doing inference.
The usual example for Markov networks is four variables connected in a square. The corresponding independence assumption is that any two opposite corners are independent given the other two corners. There is no Bayesian network encoding exactly that.
Okay, I’m recalling the “troublesome” cases that Pearl brings up, which gives me a better idea of what you mean. But this is not a counterexample. It just means that you can’t do it on a Bayes net with binary nodes. You can still represent that situation by merging (either pair of) the screening nodes into one node that covers all combinations of possibilities between them.
Do you have another example?
But again, why would you want that? As I said in the grand^(n)parent, you don’t need to when doing inference.
Sure you do: you want to know which and how many variables you have to look up to make your prediction.
merging (either pair of) the screening nodes into one node
Then the network does not encode the conditional independence between the two variables that you merged.
The task you have to do when making predictions is marginalization: in order to compute P(Rain|WetGrass), you need to sum P(Rain, WetGrass, X, Y, Z) over all possible values of the variables X, Y, Z that you didn’t observe (and then normalize). Here it is very helpful to have the distribution factored into a tree, since that can make it feasible to do variable elimination (or related algorithms like belief propagation). But the directions on the edges in the tree don’t matter: you can start at any leaf node and work across.
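As a concrete sketch (Python, made-up pairwise factors for a chain of four binary variables), here is that elimination pass starting from the observed leaf. Note that it only multiplies factors and sums variables out; it never consults the direction of any edge.

```python
import itertools

# Pairwise factors F_i(x_i, x_{i+1}) along a chain X1 - X2 - X3 - X4 (made-up values).
tables = ([0.9, 0.1, 0.3, 0.7], [0.8, 0.2, 0.4, 0.6], [0.5, 0.5, 0.1, 0.9])
factors = [dict(zip(itertools.product((0, 1), repeat=2), tbl)) for tbl in tables]

def chain_query(factors, observed_last):
    # Compute P(X1 | X4 = observed_last) by eliminating one variable at a time,
    # starting at the observed leaf and working back toward X1.
    message = {x: factors[-1][(x, observed_last)] for x in (0, 1)}
    for f in reversed(factors[:-1]):
        message = {x: sum(f[(x, y)] * message[y] for y in (0, 1)) for x in (0, 1)}
    z = sum(message.values())
    return {x: message[x] / z for x in (0, 1)}

print(chain_query(factors, observed_last=1))
```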