I imagine the map-maker as the whole end-to-end process, part of which may be in the “environment” itself. So the map-maker would not just be the camera, but also the photon fields entering the camera, the light source, the physical objects reflecting the light, and anything else along the causal path between the camera and the “territory”. On the other end, the map-maker includes whatever interpretive machinery computes things from the camera data (including e.g. an MPEG converter), all the way to the part which handles queries on the “map”. The reason for taking such an expansive view of “map-maker” is that we want to talk about maps matching territories, and about the whole cause-and-effect process which makes the map match the territory; for that, we need the whole end-to-end process.
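To make the “whole end-to-end process” framing concrete, here’s a toy sketch in Python. Every function and name here is hypothetical and purely illustrative — the point is just that the “map-maker” is the composition of all the stages, from the light source through the sensor and interpretive machinery to the query handler, not any one stage alone:

```python
# Toy sketch: the map-maker as an end-to-end causal pipeline.
# All stage names and data formats here are made up for illustration.

def light_source():
    # Environment-side part of the map-maker: photons bounce off objects.
    return {"photons": "reflected off building at (3, 7)"}

def camera(photon_field):
    # The sensor alone is only one stage of the map-maker.
    return {"raw": photon_field["photons"]}

def encoder(raw_data):
    # Interpretive machinery, standing in for e.g. an MPEG converter.
    return {"encoded": raw_data["raw"].upper()}

def query(the_map, question):
    # The part which handles queries on the "map".
    return question in the_map["encoded"].lower()

# The "map" is the product of the whole chain, and it is the whole
# chain that makes the map match the territory:
the_map = encoder(camera(light_source()))
print(query(the_map, "building"))
```

The only thing this sketch is meant to show is the composition: cut any stage out of the chain (no light source, no encoder) and the query at the end no longer tracks the territory.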
In principle, neither the map nor the territory has to be a causal model—bits recorded by a video camera could be a “map” of some territory, for instance. But for purposes of embedded agency, we’re mainly interested in cases where the map and territory are causal, because that’s what we need for agenty reasoning: optimization, reflection on our own map-making, etc.
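One way to see why agenty reasoning needs a causal map, not just a correlational one: only a causal model answers “what happens if I *do* X?” correctly. Here’s a minimal toy sketch, using a made-up structural model (rain causes both wet ground and umbrella sales) — the variables and probabilities are all assumptions for illustration:

```python
# Toy structural model: rain -> wet_ground, rain -> umbrella_sales.
# A correlational map sees sales and wet ground move together; a causal
# map additionally predicts what happens under intervention.
import random

random.seed(1)

def sample(do_umbrella_sales=None):
    rain = random.random() < 0.3
    wet_ground = rain
    # Intervening on sales severs its dependence on rain:
    umbrella_sales = rain if do_umbrella_sales is None else do_umbrella_sales
    return rain, wet_ground, umbrella_sales

# Observationally, wet ground is (here, perfectly) correlated with sales...
obs = [sample() for _ in range(10000)]
p_wet_given_sales = (sum(w for _, w, u in obs if u)
                     / max(1, sum(1 for *_, u in obs if u)))

# ...but forcing sales up (do(sales=True)) does not wet the ground;
# the ground stays dry whenever it doesn't rain.
intv = [sample(do_umbrella_sales=True) for _ in range(10000)]
p_wet_given_do_sales = sum(w for _, w, _ in intv) / len(intv)

print(p_wet_given_sales)     # observational conditional, near 1
print(p_wet_given_do_sales)  # interventional, near P(rain) = 0.3
```

An optimizer planning with only the correlational map would conclude that buying umbrellas wets the ground; the causal map gets the intervention right. That gap is the kind of thing “agenty reasoning” needs the causal structure for.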
(This also means that I’m not thinking of “maps” just in terms of mutual information—there has to be a process which causes the map to have mutual information with the territory. Can’t make a streetmap by sitting in an apartment with the blinds drawn, etc.)
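The parenthetical point — mutual information alone isn’t enough, there has to be a causal process producing it — can be illustrated with a small simulation. This is a sketch under toy assumptions: the “territory” is a random discrete variable, one “map” is made by a noisy observation channel (a stand-in for the camera pipeline), and another is drawn blinds-closed, with no causal path to the territory:

```python
# Estimating mutual information (in bits) between territory and map
# from samples, to contrast a causally-produced map with a blind one.
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

random.seed(0)
territory = [random.randint(0, 3) for _ in range(10000)]

# Map made by actually looking: a noisy channel that copies the
# territory value 90% of the time (the causal path from territory to map).
observed_map = [t if random.random() < 0.9 else random.randint(0, 3)
                for t in territory]

# "Map" made with the blinds drawn: sampled independently of the territory.
blind_map = [random.randint(0, 3) for _ in territory]

print(mutual_information(list(zip(territory, observed_map))))  # substantial
print(mutual_information(list(zip(territory, blind_map))))     # near zero
```

The blind map has essentially zero mutual information with the territory, because nothing causes it to match; the observed map carries information precisely because of the causal channel. (Mutual information is here a *consequence* of the map-making process, not a substitute for it.)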