I think it’s pretty clear why we think of the map-making sailboat as “having knowledge” even if it sinks: our own model of the world expects maps to be legible to agents in the environment, so we lump them into “knowledge” even before actually seeing anyone use any particular map. You could try to predict this legibility aspect of how we think of knowledge from the atomic positions of the item itself, but you’re going to get weird edge cases unless you actually build an intentional-stance-level model of the surrounding environment to see who might read the map.
EDIT: What’s interesting to me here is the follow-up question: what does this imply about how granular to be when thinking about knowledge (and similar things)?
Do you want to chat sometime about this?