Thanks for clarifying.
I do think it can happen in my system that you allocate an object for something that is actually zero or more than one object, and I don't have a procedure for resolving such map-territory mismatches yet. I do think it's imaginable, though, to have a procedure that defines new objects and tries to edit all the beliefs associated with the old object.
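To make that slightly more concrete, here's a toy sketch of the "one symbol turned out to be two objects" repair step. Everything here (the belief representation, the names, the `reassign` rule) is invented for illustration, not a worked-out part of the system:

```python
def split_object(beliefs, old_symbol, new_symbols, reassign):
    """Toy repair step for a map-territory mismatch where `old_symbol`
    turned out to denote several objects. `beliefs` is a list of
    (predicate, symbol) pairs; `reassign` decides, per belief, which of
    the new symbols (if any) it should now be about."""
    repaired = []
    for predicate, symbol in beliefs:
        if symbol != old_symbol:
            repaired.append((predicate, symbol))
            continue
        target = reassign(predicate, new_symbols)
        if target is not None:
            repaired.append((predicate, target))
        # beliefs we can't confidently reassign get dropped here; they
        # could instead be flagged for re-observation
    return repaired

# e.g. "the cup is red" and "the cup is on the table" turned out to be
# about two different cups:
beliefs = [("red", "cup_1"), ("on_table", "cup_1"), ("blue", "mug_3")]
print(split_object(
    beliefs, "cup_1", ["cup_1a", "cup_1b"],
    reassign=lambda pred, news: news[0] if pred == "red" else news[1],
))  # [('red', 'cup_1a'), ('on_table', 'cup_1b'), ('blue', 'mug_3')]
```

The hard part, of course, is the `reassign` step, i.e. deciding which evidence was actually about which object, and that's exactly the part I don't have a procedure for yet.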
I definitely haven't described how we determine when to create a new object to add to our world model, but one could imagine an algorithm that checks when there's some useful latent for explaining some observations, constructs a model for that object, and then creates a new object in the abstract reasoning engine. There's still open work to do on how the correspondence between the constant symbol for the object and our (e.g. visual) model of the object can be formalized and used, but I don't see why it wouldn't be feasible.
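In pseudocode-ish Python, the kind of check I'm imagining looks something like the following. The "useful latent" criterion, the object model (just a shared mean over toy 1-D observations), and all the names are placeholders I made up for illustration:

```python
import statistics

def explanation_gain(observations, complexity_penalty=2.0):
    """Toy criterion: how much better do we explain the observations by
    positing one latent object (modelled as a shared mean) than by
    treating them as unstructured noise around zero?"""
    mu = statistics.fmean(observations)
    fit_without = sum(x ** 2 for x in observations)       # no latent object
    fit_with = sum((x - mu) ** 2 for x in observations)   # one latent object
    return (fit_without - fit_with) - complexity_penalty

class WorldModel:
    """Minimal registry mapping constant symbols to object models."""
    def __init__(self):
        self.objects = {}    # constant symbol -> associated (e.g. visual) model
        self._next_id = 0

    def maybe_create_object(self, observations):
        """If a latent object is a useful explanation of the observations,
        mint a new constant symbol and store the associated model."""
        if explanation_gain(observations) > 0:
            symbol = f"obj_{self._next_id}"
            self._next_id += 1
            self.objects[symbol] = {"mean": statistics.fmean(observations)}
            return symbol
        return None

wm = WorldModel()
print(wm.maybe_create_object([5.1, 4.9, 5.0, 5.2]))  # 'obj_0': a shared cause explains these well
print(wm.maybe_create_object([0.1, -0.2, 0.05]))     # None: noise around zero is explanation enough
```

The real version would obviously use a much richer object model and a proper model-comparison criterion; the point is just that "check whether a new latent is useful, then register a constant symbol plus an associated model for it" seems implementable.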
I agree that we end up with a map that doesn't actually fit the territory, but I think it's fine if there's an unresolvable mismatch somewhere; there's still a useful correspondence in most places. (Sure, classical logic would collapse from a contradiction, but the reasoning here is all probabilistic anyway.) Of course we don't yet have anything in the system that can represent that the territory differs from the map. This is related to embedded agency, and further work is still needed on how to model your map as possibly not fitting the territory, and on how that could be used.