“Condensation of information always selects for goal-relevant information.” To me this seems either not true, or it generalizes the concept of “goal-relevant” so broadly that it doesn’t seem useful to me. If one is actively trying to create abstractions that are useful for achieving some goal, then it is true. But the general case of losing information need not be directed toward some goal. For instance, it’s easy to construct a lossy map that takes high-dimensional data to low-dimensional data; whether or not it’s useful seems like a different issue.
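To make that concrete, here is a minimal sketch (Python with numpy; the function name, dimensions, and random seed are arbitrary choices for illustration, not anything from the original claim) of a lossy map constructed with no goal in mind:

```python
import numpy as np

# A lossy map from high-dimensional data to low-dimensional data,
# built with no goal in mind: a fixed random linear projection.
rng = np.random.default_rng(0)

def random_projection(X, out_dim=3):
    """Project rows of X (n x d) down to out_dim dimensions.

    The projection matrix is drawn at random, so information is
    discarded indiscriminately rather than selected for any goal.
    """
    d = X.shape[1]
    P = rng.normal(size=(d, out_dim))  # arbitrary, goal-agnostic projection
    return X @ P

# Example: 1000 points in 100 dimensions squeezed down to 3.
X = rng.normal(size=(1000, 100))
Z = random_projection(X)
print(Z.shape)  # (1000, 3) -- most of the original information is gone
```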
One might say that they are interested in abstractions only insofar as they are useful. They might also make an empirical claim (or a stylistic choice) that thinking about abstractions in the framework of goal-directed action will be a fruitful way to do AI, study the brain, etc., but these are empirical claims that will be borne out by how well different research programs help us understand things, and are not statements of fact as far as I can tell.
You might also reply to this, “no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction,” but then the claim that an abstraction only exists with goal-relevance seems tautological.
“For instance, it’s easy to construct a lossy map that takes high-dimensional data to low-dimensional data; whether or not it’s useful seems like a different issue.”
Yep. Most such maps are useless (to you) because the goals you have occupy a small fraction of the possible goals in goal-space.
“You might also reply to this, ‘no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction,’ but then the claim that an abstraction only exists with goal-relevance seems tautological.”
Nope, all condensation of information is abstraction. Different abstractions imply different regions of goal-space are more likely to contain your goals.