A Theory of Structural Independence
Link post
The linked paper introduces the key concept behind factored space models / finite factored sets, namely structural independence, in a fully general setting using families of random elements. The key contribution is a general definition of the history object, together with a theorem that the history fully characterizes the semantic implications of assuming that a family of random elements is independent. This is analogous to how d-separation precisely characterizes which node variables are independent given other node variables in every probability distribution that satisfies the Markov property on the graph.
Abstract: Structural independence is the (conditional) independence that arises from the structure of a distribution rather than from its precise numerical values. We develop this concept and relate it to d-separation and structural causal models.
Formally, let $U = (U_i)_{i \in I}$ be an independent family of random elements on a probability space $(\Omega, \mathcal{A}, P)$. Let $X$, $Y$, and $Z$ be arbitrary $\sigma(U)$-measurable random elements. We characterize all independences $X \perp\!\!\!\perp Y \mid Z$ implied by the independence of $U$ and call these independences structural. Formally, these are the independences that hold in all probability measures $P'$ that render $U$ independent and are absolutely continuous with respect to $P$, i.e., for all such $P'$, it must hold that $X \perp\!\!\!\perp_{P'} Y \mid Z$.
We introduce the history $\operatorname{history}(X \mid Z)\colon \Omega \to \mathcal{P}(I)$, a combinatorial object that measures the dependence of $X$ on $U_i$ for each $i \in I$ given $Z$. The independence of $X$ and $Y$ given $Z$ is implied by the independence of $U$ if and only if $\operatorname{history}(X \mid Z) \cap \operatorname{history}(Y \mid Z) = \emptyset$ almost surely with respect to $P$.
Finally, we apply this d-separation-like criterion in structural causal models to discover a causal direction in a toy setting.
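To make the quantifier over measures in the abstract concrete, here is a minimal numerical sketch (the helpers `joint` and `independent` are hypothetical names for illustration, not from the paper). With two coin-flip factors $U_1$ and $U_2$, the independence of $X = U_1$ and $Y = U_2$ survives every reweighting that keeps $U$ independent, while the independence of $X = U_1$ and $Y = U_1 \oplus U_2$ holds under the fair measure but fails under a biased product measure that is absolutely continuous with respect to it; the latter independence is therefore numerical, not structural.

```python
# Illustrative helpers, not from the paper.
import itertools

def joint(p1, p2):
    """Product measure on {0,1}^2 rendering U = (U1, U2) independent."""
    return {(u1, u2): p1[u1] * p2[u2]
            for u1, u2 in itertools.product((0, 1), repeat=2)}

def independent(P, f, g, tol=1e-12):
    """Check whether f and g are independent under the measure P."""
    def marginal(h):
        m = {}
        for w, p in P.items():
            m[h(w)] = m.get(h(w), 0.0) + p
        return m
    mf, mg = marginal(f), marginal(g)
    pair = {}
    for w, p in P.items():
        pair[(f(w), g(w))] = pair.get((f(w), g(w)), 0.0) + p
    return all(abs(pair.get((a, b), 0.0) - mf[a] * mg[b]) < tol
               for a in mf for b in mg)

X = lambda w: w[0]             # X = U1
Y_struct = lambda w: w[1]      # Y = U2
Y_num = lambda w: w[0] ^ w[1]  # Y = U1 xor U2

fair = joint({0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5})
biased = joint({0: 0.3, 1: 0.7}, {0: 0.3, 1: 0.7})  # still a product measure

print(independent(fair, X, Y_struct), independent(biased, X, Y_struct))  # True True
print(independent(fair, X, Y_num), independent(biased, X, Y_num))        # True False
```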
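The history criterion predicts this outcome combinatorially. In this toy setting with trivial $Z$, one can take the history to be a constant set: the smallest $S \subseteq I$ such that the variable is a function of $(U_i)_{i \in S}$ alone. A sketch under that simplifying assumption (the `history` function below is illustrative, not the paper's general pointwise definition):

```python
from itertools import chain, combinations, product

def history(f, n=2):
    """Smallest set S of factor indices such that f is a function of
    (U_i)_{i in S} alone; indices 0 and 1 stand for U1 and U2. This is a
    simplification of the paper's pointwise history for trivial Z."""
    omegas = list(product((0, 1), repeat=n))
    subsets = chain.from_iterable(
        combinations(range(n), r) for r in range(n + 1))
    for S in subsets:  # enumerated smallest-first
        table = {}
        # f is determined by the coordinates in S iff sample points that
        # agree on S always yield the same value of f
        if all(table.setdefault(tuple(w[i] for i in S), f(w)) == f(w)
               for w in omegas):
            return set(S)

X = lambda w: w[0]
Y_struct = lambda w: w[1]
Y_num = lambda w: w[0] ^ w[1]

print(history(X), history(Y_struct), history(Y_num))
# {0} {1} {0, 1}: history(X) and history(Y_struct) are disjoint, so their
# independence is structural; history(X) meets history(Y_num), matching
# the failure of independence under the biased measure above.
```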