I don’t currently understand why you’re focusing so much on synergistic information, rather than just using redundancy to do the whole abstraction hierarchy.
I think I see why you’ve gone that way: examples with synergistic information can’t be broken down cleanly. I agree that this is annoying, but it can just be ignored (as in Condensation, which simply becomes a more approximate result in those cases).
Another approach is to add a second mechanism for breaking up the model, besides information independence. For example, you could allow causal cross-links in synergistic cases: Z should be thought of as causally downstream of X and Y, and if you allow this link to be encoded in your model, then you get to break up X, Y, Z without increasing the overall complexity of the model.
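To make the synergy problem concrete, here is a minimal sketch (my own illustration, not from the thread) using the canonical synergistic case: Z = X XOR Y with independent fair bits X and Y. Neither X nor Y alone carries any information about Z, yet the pair determines it completely, so no clean pairwise decomposition exists. This is exactly the situation where treating Z as causally downstream of (X, Y) lets a model encode one cross-link instead of lumping all three variables together.

```python
from itertools import product
from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(A;B) in bits from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# Enumerate the full joint distribution of two independent fair bits.
samples = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

print(mutual_information([(z, x) for x, y, z in samples]))       # I(Z;X) = 0.0
print(mutual_information([(z, y) for x, y, z in samples]))       # I(Z;Y) = 0.0
print(mutual_information([(z, (x, y)) for x, y, z in samples]))  # I(Z;(X,Y)) = 1.0
```

The whole bit of information about Z is synergistic: it appears only when X and Y are considered jointly, which is why redundancy-based decompositions alone can’t cut this system apart.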
Can it be? That seems like a pretty major feature of reality; ignoring it probably won’t lead anywhere productive.
Hmm, somewhat inelegant, but that seems like a workable idea...