From the perspective of this agent, i.e. me, the world and my model of it are very much distinguishable. Especially when the world surprises me by demonstrating that my model was inaccurate. Have you never discovered you were wrong about something? How can this happen if you cannot distinguish your model of the world from the world itself?
You’re absolutely right to focus on the moment the model fails. Updating your model to account for its failures is effectively what learning is. Again, if we look at you from the outside, we can give an account of the form: the model failed because it did not correspond to reality, so the agent updated it to one that corresponded better to reality (i.e. was more true).
But again, from the inside there is no access to reality, only the model. Perception and prediction are both mediated by the model itself, and when they contradict each other the model must be adjusted. But the idea that the perceptions come from a ‘real’ external world is itself just a feature of the model.
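To make that concrete, here’s a toy sketch in Python (with names I’m inventing for illustration; this is not a claim about how any real agent works). The agent never touches ‘reality’ directly: it compares a model-mediated prediction against a perception that simply arrives as a number, and it learns by nudging its one-parameter model to shrink the gap, a simple delta-rule update.

```python
# Toy model-updating agent: a sketch, not a claim about real agents.
# Its whole "model" is one number, gain: its belief about how far one
# unit of wheel-spin moves it. Learning is a delta-rule update that
# shrinks the gap between prediction and perception.

def step(gain, action, perception, lr=0.1):
    prediction = gain * action        # expectation, mediated by the model
    error = perception - prediction   # the only 'contact' with anything outside
    gain += lr * error * action       # adjust the model, not the world
    return gain, error

# From the outside we can say perceptions come from a world whose true
# gain is 2.0. From the inside, they are just numbers that arrive.
def world(action, true_gain=2.0):
    return true_gain * action

gain = 0.5                            # an initially inaccurate model
for _ in range(50):
    gain, error = step(gain, action=1.0, perception=world(1.0))
print(round(gain, 3))                 # ~2.0: prediction and perception now agree
```

From the outside, we’d say the gain converged to the true value; from the inside, all that happened is that two internal numbers stopped disagreeing.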
You have the extraordinary ability to change your own model in response to its contradictions. Let’s consider the case of agents that can’t do that.
If a Roomba is flipped on its back and its wheels keep spinning (real Roombas presumably have sensors to handle this situation, but let’s assume this one doesn’t), then from the outside we can say that the Roomba’s model, which says that spinning your wheels makes you move, is no longer in correspondence with reality. But from the point of view of the Roomba, all that can be said is that the world has become incomprehensible.
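In code, this fixed agent is the same toy loop with the update deleted (again a sketch with invented names, not a model of an actual Roomba):

```python
# The same toy agent with no ability to revise its model. Flipped on
# its back, perceived displacement is always 0 while the model keeps
# predicting movement; the mismatch never resolves.

def fixed_step(gain, action, perception):
    prediction = gain * action
    return perception - prediction    # permanent surprise; gain never changes

gain = 2.0                            # a model that once fit the world
for _ in range(3):
    print(fixed_step(gain, action=1.0, perception=0.0))  # -2.0, forever
```

All the error signal can tell this agent is that its predictions keep failing; it has no operation that turns that failure into a better model.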