You can simply define redunds over collections of subsets.
As in, take a set of variables X, then search for some set of its (non-overlapping?) subsets such that there’s a nontrivial natural latent over it? Right, it’s what we’re doing here as well.
Potentially, a single X can then generate several such sets, corresponding to different levels of organization. This should work fine, as long as we demand that the latents defined over “coarser” sets of subsets contain some information not present in “finer” latents.
The natural next step is to then decompose the set of all latents we’ve discovered, factoring out information redundant across them. The purpose of this is to remove lower-level information from higher-level latents.
Which almost replicates my initial picture: the higher-level latents now essentially contain just the synergistic information. The difference is that it’s “information redundant across all ‘coarse’ variables in some coarsening of X and not present in any subset of the ‘finer’ variables defining those coarse variables”, rather than defined in a “self-contained” manner for every subset of X.
That definition does feel more correct to me!
A recent impossibility result seems to rule out a general multivariate PID that guarantees non-negativity of all components, though partial entropy decomposition may be more tractable.
I think the subsets can actually be partially overlapping. For instance, you may have a λ that's approximately deterministic w.r.t. {X1, X2} and {X2, X3} but not X2 alone. Weak redundancy (approximately deterministic w.r.t. X̄i, i.e. all of X except Xi, for every i) is also an example of redunds across overlapping subsets.
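A minimal toy construction (my own illustrative example, not from the thread) that exhibits this: let X1, X2 be independent fair bits, X3 = X1 XOR X2, and take the candidate latent λ = X1 XOR X2. Then λ is exactly deterministic given {X1, X2} and given {X2, X3} (since λ = X3), but not given X2 alone, which we can check by computing conditional entropies over the joint distribution:

```python
from itertools import product
from collections import defaultdict
from math import log2

def cond_entropy(joint, target_idx, given_idx):
    """H(target | given) in bits, from a dict {outcome_tuple: prob}."""
    marg = defaultdict(float)  # p(given)
    pair = defaultdict(float)  # p(target, given)
    for outcome, p in joint.items():
        g = tuple(outcome[i] for i in given_idx)
        t = outcome[target_idx]
        marg[g] += p
        pair[(t, g)] += p
    return -sum(p * log2(p / marg[g]) for (t, g), p in pair.items() if p > 0)

# Joint distribution over (X1, X2, X3, lam): X1, X2 independent fair bits,
# X3 = X1 ^ X2, and the candidate latent lam = X1 ^ X2.
joint = {}
for x1, x2 in product([0, 1], repeat=2):
    x3 = x1 ^ x2
    lam = x1 ^ x2
    joint[(x1, x2, x3, lam)] = 0.25

print(cond_entropy(joint, 3, (0, 1)))  # H(lam | X1,X2) = 0.0
print(cond_entropy(joint, 3, (1, 2)))  # H(lam | X2,X3) = 0.0
print(cond_entropy(joint, 3, (1,)))    # H(lam | X2)    = 1.0
```

So λ is a redund over the overlapping subsets {X1, X2} and {X2, X3} even though no single shared variable pins it down; it also illustrates why this information reads as synergistic rather than residing in any individual Xi.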
Thanks, that’s useful.