People (and robots) model the world by starting with sensor data (vision, proprioception, etc.), then finding low-level (spatiotemporally-localized) patterns in that data, then higher-level patterns in the patterns, patterns in the patterns in the patterns, etc. I’m trying to understand how this relates to “abstraction” as you’re talking about it.
Sensor data, say the bits recorded by a video camera, is not a causal diagram, but it is already an “abstraction” in the sense that it has mutual information with the part of the world it’s looking at while being many orders of magnitude less complicated. Do you see a video camera as an abstraction-creator / map-maker by itself?
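(To make the mutual-information point concrete, here's a toy sketch, not anything from the post itself: a "scene" with 256 possible states gets quantized by a "camera" down to 4 states. The camera output is vastly simpler than the scene, yet retains nonzero mutual information with it.)

```python
import numpy as np

rng = np.random.default_rng(0)

# "Territory": a fine-grained scene value; "map": a coarse camera reading.
# The camera quantizes heavily (far fewer states) yet retains mutual
# information with the scene.
scene = rng.integers(0, 256, size=100_000)   # 256 possible states (8 bits)
camera = scene // 64                         # quantized to 4 states (2 bits)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint = np.histogram2d(x, y, bins=(np.arange(257), np.arange(5)))[0]
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

print(mutual_information(scene, camera))  # ~2 bits: much less than the
                                          # scene's 8 bits, but nonzero
```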
What if the video camera has an MPEG converter? MPEGs can (I think) recognize that low-level pattern X tends to follow low-level pattern Y, and this is more-or-less the same low-level primitive out of which humans build their sophisticated causal understanding of the world (according to my current understanding of the human brain’s world-modeling algorithms). So is a video camera with an MPEG converter an abstraction-creator / map-maker? What’s your thinking?
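(Real MPEG coding uses motion-compensated prediction plus transform coding of residuals; the toy frame-differencing below is only meant to illustrate the bare "pattern Y predicts pattern X" primitive, with made-up frame data:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Two consecutive "frames": the second is mostly the first, plus a small
# localized change -- the temporal redundancy that inter-frame coding exploits.
frame_y = rng.integers(0, 256, size=(8, 8)).astype(np.int16)
frame_x = frame_y.copy()
frame_x[2:5, 2:5] += 3                 # a small 3x3 local change

# Inter-frame prediction: encode X as (prediction from Y) + residual.
residual = frame_x - frame_y           # mostly zeros -> cheap to store
reconstructed = frame_y + residual     # the decoder reverses the step

assert np.array_equal(reconstructed, frame_x)
print(np.count_nonzero(residual))      # 9 nonzero entries out of 64
```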
I imagine the map-maker as the whole end-to-end process, part of which may be in the “environment” itself. So the map-maker would not just be the camera, but also the photon fields entering the camera, the light source, the physical objects reflecting the light, and anything else along the causal path between the camera and the “territory”. On the other end, the map-maker includes whatever interpretive machinery computes things from the camera data (including e.g. an MPEG converter), all the way to the part which handles queries on the “map”. The reason for taking such an expansive view of “map-maker” is that we want to talk about maps matching territories, and the whole cause-and-effect process which makes the map match the territory, so we need the whole end-to-end process.
(This also means that I’m not thinking of “maps” just in terms of mutual information—there has to be a process which causes the map to have mutual information with the territory. Can’t make a streetmap by sitting in an apartment with the blinds drawn, etc.)
In principle, neither the map nor the territory has to be a causal model—bits recorded by a video camera could be a “map” of some territory, for instance. But for purposes of embedded agency, we’re mainly interested in cases where the map and territory are causal, because that’s what we need for agenty reasoning: optimization, reflection on our own map-making, etc.
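(A minimal sketch of what "causal" buys you over mere correlation, using a hypothetical rain/sprinkler/wet-grass model of my own invention, not anything from the post: a causal map supports interventions, where setting a variable severs its incoming edges, which is exactly what you need to reason about optimization.)

```python
import random

random.seed(0)

# Hypothetical causal model: Rain -> Sprinkler, Rain -> Wet, Sprinkler -> Wet.
def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    sprinkler = (random.random() < 0.1) if rain else (random.random() < 0.5)
    if do_sprinkler is not None:
        sprinkler = do_sprinkler       # intervention: sever Rain -> Sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Under do(sprinkler=True), P(rain) stays at its base rate of ~0.3;
# merely *observing* sprinkler=True would instead lower it (~0.08 here),
# since rain makes sprinkler use less likely in this model.
runs = [sample(do_sprinkler=True) for _ in range(100_000)]
print(sum(r for r, _, _ in runs) / len(runs))
```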