The main issue with “my model of” + “this door” = “my model of this door”, taken literally, is that there’s no semantics. It’s the semantics that I expect to need something quine-like.
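(For readers unfamiliar with the term: a quine is a program whose output is exactly its own source code, i.e. a self-referential fixed point. The classic Python version below is just an illustration of the flavor of self-reference involved, not a proposal for the semantics itself.)

```python
# Classic Python quine: running this program prints its own source exactly.
# %r substitutes the repr of the string (quotes and escapes included),
# and %% becomes a literal %, so the output reproduces both lines verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)
```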
Adding actions is indeed a big step, and I still don’t know the best way to do it. The main strategies I’ve thought about are:

- Keep the model itself passive, but include an agent with actions inside the model, and then require correctness of the model-abstraction. (In other words: put an agent in the map, then require map-territory correspondence.)
- Something thermodynamic-esque, but not predictive processing. This one seems most promising long-term, but it’s also the one I’m still most confused about how to set up.
I think you’re saying that I’m proposing how to label everything, but not describing what those things are or do. (Correct?) I’d say we learn general rules to follow with the “my model of” piece-of-thought, along with exceptions to those rules, exceptions to the exceptions, and so on. For instance, “the relation between my-model-of-X and my-model-of-Y is the same as the relation between X and Y” could be an imperfect rule with various exceptions. See my “Python code runs the same on Windows and Mac” example here.
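To make the flavor of that rule concrete, here is a toy sketch in Python. Everything here (the `model_of` map, the `heavier_than` relation, the exception list) is a hypothetical illustration I'm supplying, not part of the original proposal: the rule "the relation between my-model-of-X and my-model-of-Y matches the relation between X and Y" is treated as a defeasible default, with explicitly listed exceptions where it is suspended.

```python
# Toy sketch (hypothetical names throughout): structure-preservation as a
# defeasible rule. The "territory" objects are dicts; the "map" objects are
# tagged copies produced by model_of.

def model_of(x):
    """Map a territory object to its model counterpart (toy version)."""
    return ("model", x)

def heavier_than(a, b):
    """A relation on territory objects."""
    return a["weight"] > b["weight"]

def model_heavier_than(ma, mb):
    """The corresponding relation on model objects."""
    _, a = ma
    _, b = mb
    return heavier_than(a, b)

# Pairs of names where the rule is known to fail; empty by default.
EXCEPTIONS = set()

def rule_holds(x, y):
    """Check the structure-preservation rule, minus listed exceptions.

    Returns None when the rule is explicitly suspended for this pair,
    otherwise True/False for whether map and territory agree.
    """
    if (x["name"], y["name"]) in EXCEPTIONS:
        return None
    return heavier_than(x, y) == model_heavier_than(model_of(x), model_of(y))

anvil = {"name": "anvil", "weight": 50}
feather = {"name": "feather", "weight": 0.01}
print(rule_holds(anvil, feather))  # → True: map and territory agree here
```

The "exceptions to the exceptions" structure would then just be further layers of the same pattern: a more specific list that re-enables the rule where a coarser exception had suspended it.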