Thanks for the summary! I agree that this is missing some extra consideration for programs that are planning / searching at test time. We normally think of Google Maps as non-agenty, “tool-like,” “task-directed,” etc., but it’s performing a search for the best route from A to B, and it is capable of planning around obstacles—as long as those obstacles are within the ontology of its map of ways from A to B.
A thermostat is dumber than Google Maps, but its data is more closely connected to the real world (a local temperature reading rather than a general map), and so is its output (directly controlling a heater rather than displaying directions). If we made a “Google Thermostat Maps” website that let you input your thermostat’s state and showed you a heater control value, it would perform the same computations as your thermostat but lose its apparent agency. The condition for us treating the thermostat like an agent isn’t just what computation it’s doing; it’s that its input, search (such as it is), and output ontologies match and extend into the real world well enough that even very simple computation can produce behavior suitable for the intentional stance.
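To make the point concrete, here is a minimal sketch (all names hypothetical) of the same bang-bang computation framed both ways: wired to a sensor and heater it reads as agent-like, while served as a lookup it reads as a mere tool.

```python
def heater_control(temp: float, setpoint: float) -> bool:
    """The thermostat's entire 'search': should the heater be on?"""
    return temp < setpoint

# Framing 1: a thermostat. Input and output are coupled to the world
# (sensor in, actuator out), so we naturally read it as "wanting" to
# keep the room warm.
def thermostat_step(read_sensor, drive_heater, setpoint: float = 20.0) -> None:
    drive_heater(heater_control(read_sensor(), setpoint))

# Framing 2: "Google Thermostat Maps". Identical computation, but the
# user supplies the state and merely reads off the answer, so the
# apparent agency evaporates.
def thermostat_maps(temp: float, setpoint: float) -> str:
    return "ON" if heater_control(temp, setpoint) else "OFF"
```

The two framings share every line of decision logic; only the input and output ontologies differ, which is exactly the condition described above.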