Nice, I’ve gestured at similar things in this comment. Conceptually, the main thing you want to model is variables that control the relationships between other variables. The upshot is that you can continue the recursion indefinitely: once you have second-order variables that control the relationships between other variables, you can then have variables that control the relationships among second-order variables, and so on.
Using function calls as an analogy: when you’re executing a function that itself makes a lot of function calls, there are two main ways those function calls can be useful:
1. The results of these function calls might be used to compute the final output.
2. The results of these function calls can tell you what other function calls would be useful to make (e.g. if you want to find the shape of a glider, the position tells you which cells to look at to determine that; see the sketch after this list).
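To make the second kind concrete, here’s a minimal sketch (the grid representation and helper names are made up for illustration, assuming a single glider on the board):

```python
# Minimal sketch: the result of one call (the glider's position) only matters
# because it tells us which cells the next call should read.

def find_glider_position(grid):
    # Returns the top-left corner of the bounding box of the live cells.
    live = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v]
    return min(r for r, _ in live), min(c for _, c in live)

def read_shape_at(grid, top, left, size=3):
    # Which cells get read is controlled by the result of the previous call.
    return tuple(
        tuple(grid[top + dr][left + dc] for dc in range(size))
        for dr in range(size)
    )

def glider_shape(grid):
    top, left = find_glider_position(grid)   # call #1: tells us where to look
    return read_shape_at(grid, top, left)    # call #2: computes the answer

# Example: a glider in the top-left corner of a 5x5 board.
grid = [
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(glider_shape(grid))  # ((0, 1, 0), (0, 0, 1), (1, 1, 1))
```

The point is that find_glider_position’s output never appears in the final shape directly; its only role is to determine which other lookups are worth making.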
An adequate version of this should also be Turing complete, which means it can accommodate shifting structures, and function calls seem like a good way to represent hierarchies of abstractions.
CSI (context-specific independence) in Bayesian networks also deals with the idea that the causal structure between variables changes over time or depending on context (you’re probably more interested in how relationships between levels of abstraction change with context, but the two directions seem linked). I plan to explore the following variant at some point (not sure if it’s already in the literature):
Suppose that there is a variable Y that “controls” the causal structure of X. We use the good old KL approximation to represent the error under a particular diagram G, conditional on a particular value of Y: $D_{KL}\big(P(X \mid Y=y) \,\big\|\, \prod_i P(X_i \mid X_{\mathrm{pa}_G(i)}, Y=y)\big)$.
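(To spell that out, and assuming I’m expanding the KL correctly: for discrete X this error is just a gap between conditional entropies, which also makes the continuity point below immediate.)

$$D_{KL}\Big(P(X \mid Y=y)\,\Big\|\,\prod_i P(X_i \mid X_{\mathrm{pa}_G(i)}, Y=y)\Big) \;=\; \sum_i H\big(X_i \mid X_{\mathrm{pa}_G(i)}, Y=y\big) \;-\; H\big(X \mid Y=y\big)$$

This is nonnegative and equals 0 exactly when P(X|Y=y) factorizes according to G.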
You can imagine that the conditional distribution initially approximately satisfies a diagram G1, but as you change the value of Y, the error for G1 goes up while the error for some other diagram G2 goes to 0.
In particular, if Y is a continuous variable and the conditional distribution P(X|Y=y) changes continuously with Y, then $D_{KL}\big(P(X \mid Y=y) \,\big\|\, \prod_i P(X_i \mid X_{\mathrm{pa}_G(i)}, Y=y)\big)$ changes continuously with Y, which is quite nice.
So this is a formalism that deals with “context-dependent structure” in a way that plays well with continuity, and if you have discrete variables controlling the causal structure, you can use it to accommodate uncertainty over the discrete outcomes (that determine causal structure).
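A small numerical sketch of the variant above (a toy setup I’m making up for illustration: three binary variables standing in for X, a mixing weight y in [0, 1] standing in for Y, and two candidate chain diagrams):

```python
import numpy as np

# Variables are indexed 0, 1, 2 (standing in for X1, X2, X3).
# Candidate diagrams:
#   G1: 0 -> 1 -> 2, i.e. P(X1) P(X2|X1) P(X3|X2)
#   G2: 0 -> 2 -> 1, i.e. P(X1) P(X3|X1) P(X2|X3)
# P(X|Y=y) is a mixture that satisfies G1 exactly at y=0 and G2 exactly at y=1.

def noisy_chain(order, flip=0.1):
    """Joint over three binary variables: the first variable in `order` is a
    fair coin, each later one is a noisy copy of the previous one; the result
    is returned with axes in (0, 1, 2) order."""
    p = np.zeros((2, 2, 2))
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                vals = (a, b, c)  # values of the variables listed in `order`
                prob = 0.5
                prob *= (1 - flip) if vals[1] == vals[0] else flip
                prob *= (1 - flip) if vals[2] == vals[1] else flip
                idx = [0, 0, 0]
                for pos, var in enumerate(order):
                    idx[var] = vals[pos]
                p[tuple(idx)] = prob
    return p

P_A = noisy_chain(order=(0, 1, 2))  # satisfies G1 exactly
P_B = noisy_chain(order=(0, 2, 1))  # satisfies G2 exactly

def factorized(p, parents):
    """Prod_i P(X_i | X_pa(i)) computed from the joint p; `parents` maps each
    variable index to a tuple of its parent indices."""
    q = np.ones_like(p)
    for i, pa in parents.items():
        keep = (i,) + pa
        drop = tuple(ax for ax in range(3) if ax not in keep)
        marg = p.sum(axis=drop, keepdims=True)         # P(X_i, X_pa(i))
        cond = marg / marg.sum(axis=i, keepdims=True)  # P(X_i | X_pa(i))
        q = q * cond
    return q

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

G1 = {0: (), 1: (0,), 2: (1,)}  # 0 -> 1 -> 2
G2 = {0: (), 2: (0,), 1: (2,)}  # 0 -> 2 -> 1

for y in (0.0, 0.25, 0.5, 0.75, 1.0):
    P_y = (1 - y) * P_A + y * P_B  # "Y" continuously reshapes the joint
    print(f"y={y:.2f}  err(G1)={kl(P_y, factorized(P_y, G1)):.4f}"
          f"  err(G2)={kl(P_y, factorized(P_y, G2)):.4f}")
```

By construction the error for G1 is exactly 0 at y=0 and the error for G2 is exactly 0 at y=1, and both vary continuously in y in between (errors here are in nats).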
I’d be interested in updates on that if/when you do it.