The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate, and it's absolutely not a crux for me). You are much more complex than a thermostat.
I don’t think there is any possibility of a thermostat being conscious. The linked article makes the common error of arguing that wherever there is consciousness we see some phenomenon X, therefore wherever there is X there is consciousness, and if there doesn’t seem to be any, there must be consciousness “in a sense”.
The fact that you can consciously implement the control algorithm does not tell us anything about your cognition.
Of course. The thermostat controls temperature without being conscious; I can by my own conscious actions also choose to perform the thermostat’s role.
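To be concrete, the role in question is nothing more than a control loop like the following (a minimal sketch; the names and the hysteresis value are my own assumptions, not any real device's firmware). Nothing about it requires consciousness to execute, though I could consciously follow the same rule by hand with a thermometer and a switch.

```python
# Minimal bang-bang thermostat loop (illustrative only; names and the
# hysteresis value are assumptions, not any particular device's firmware).

def thermostat_step(current_temp: float, setpoint: float, heater_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """Decide the heater's next state from the measured temperature."""
    if current_temp < setpoint - hysteresis:
        return True            # too cold: switch the heater on
    if current_temp > setpoint + hysteresis:
        return False           # too warm: switch the heater off
    return heater_on           # inside the dead band: leave the heater alone
```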
Anyway, all this began with my objecting to “agents” performing time travel, and arguing that whether an unconscious thermostat or a conscious entity such as myself controls the temperature, no time travel is involved. Nor do I achieve a goal merely by predicting that it will be achieved; I achieve it by acting. Are we disagreeing about anything at this point?
I think that when seen from outside of the agent, your account is correct. But from the perspective of the agent, the world and the world model are indistinguishable, so the relationship between prediction and time is more complex.
From the perspective of this agent, i.e. me, the world and my model of it are very much distinguishable. Especially when the world surprises me by demonstrating that my model was inaccurate. Have you never discovered you were wrong about something? How can this happen if you cannot distinguish your model of the world from the world itself?
You’re absolutely right to focus on the moment the model fails. Updating your model to account for its failures is effectively what learning is. Again, if we look at you from the outside, we can give an account of the form: the model failed because it did not correspond to reality, so the agent updated it to one which corresponded better to reality (i.e. was more true).
But again, from the inside there is no access to reality, only the model. Perception and prediction are both mediated by the model itself, and when they contradict each other the model must be adjusted. But that the perceptions come from the ‘real’ external world is itself just a feature of the model.
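A toy way to put it (the class, names, and update rule here are mine, purely for illustration, not a claim about how any real agent works): the agent never compares its model against reality directly; it only compares one model-mediated quantity, its prediction, against another, its perception, and nudges the model when the two clash.

```python
# Toy illustration: learning as adjusting the model whenever prediction
# and perception disagree. Nothing here ever touches "reality" directly.

class TinyAgent:
    """The whole 'model' is one expected value; updating it is 'learning'."""

    def __init__(self, belief: float, learning_rate: float = 0.2):
        self.belief = belief
        self.learning_rate = learning_rate

    def predict(self) -> float:
        return self.belief                     # prediction read off the model

    def update(self, perceived: float) -> None:
        # Only model-side quantities are compared; the model absorbs the clash.
        error = perceived - self.predict()
        self.belief += self.learning_rate * error
```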
You have the extraordinary ability to change your own model in response to its contradictions. Let's consider the case of agents that can't do that.
If a Roomba is flipped on its back and its wheels keep spinning (I imagine real Roombas have some kind of sensor to deal with this situation, but let's assume this one doesn't), from the outside we can say that the Roomba's model, which says that spinning your wheels makes you move, is no longer in correspondence with reality. But from the point of view of the Roomba, all that can be said is that the world has become incomprehensible.
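Or in code (a hypothetical sketch, obviously not real Roomba firmware): the model fixes "spin wheels, therefore move forward", and when that stops holding there is no update step, only a mismatch that grows without ever being explained.

```python
# Hypothetical flipped-Roomba sketch: a fixed model with no ability to
# revise itself when its predictions stop matching what it senses.

def stuck_roomba(steps: int = 5) -> None:
    predicted_position = 0.0
    sensed_position = 0.0        # flipped on its back: the wheels never grip
    for _ in range(steps):
        predicted_position += 1.0            # the fixed model: spinning = moving
        mismatch = predicted_position - sensed_position
        # There is no update step; the mismatch just accumulates, which is
        # as close as this agent gets to "the world has become incomprehensible".
        print(f"predicted {predicted_position:.0f}, sensed {sensed_position:.0f}, "
              f"mismatch {mismatch:.0f}")

stuck_roomba()
```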