If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but is itself pushing (by turning the heat on and off) towards its goal.
Only if we want to commit ourselves to a homunculus theory of consciousness and a libertarian theory of free will.
You introduced the homunculus by imagining the thermostat conscious. I responded by pointing out that if it’s going to be aware of its perception and reference, there is no reason to exclude the rest of the show.
But of course the thermostat is not conscious.
I am. When I act as the thermostat, I am perceiving the temperature of the room, and the temperature I want it to be, and I decide and act to turn the heat on or off accordingly. There is no homunculus here, nor “libertarian free will”, whatever that is, just a description of my conscious experience and actions. To dismiss this as a homunculus theory is to dismiss the very idea of consciousness.
And some people do that. Do you? They assert that there is no such thing as consciousness, or a mind, or subjective experience. These are not even illusions, for that would imply an experiencer of the illusion, and there is no experience. For such people, all talk of these things is simply a mistake. If you are one of these people, then I don’t think the conversation can proceed any further. From my point of view you would be a blind man denying the existence and even the idea of sight.
Or perhaps you grant consciousness the ability to observe, but not to do? In imagination you grant the thermostat the ability to perceive, but not to do, supposing that the latter would require the nonexistent magic called “libertarian free will”. But epiphenomenal consciousness is as incoherent a notion as p-zombies. How can a thing exist that has no effect on any physical object, yet we talk about it (which is a physical action)?
I’m just guessing at your views here.
So does the delta between goal and perception cause the action directly?
For the thermostat (assuming the bimetallic strip type), the reference is the position of a pair of contacts on either side of the strip; the temperature determines the curvature of the strip, which makes or breaks the contacts, which turns the heating on or off. This is all physically well understood. There is nothing problematic here.
For me acting as the thermostat, I perceive the delta and act accordingly. I don’t see anything problematic here either. The sage is not above causation, nor subject to causation, but one with causation. As are we all, whether we are sages or not.
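The loop is the same whichever of us implements it. A minimal sketch in Python (the names are illustrative, and the hysteresis band stands in for the gap between the two contacts):

    def thermostat_step(perceived_temp, reference_temp, heater_on, hysteresis=0.5):
        """Bang-bang control: the delta between reference and perception
        directly determines the action."""
        delta = reference_temp - perceived_temp
        if delta > hysteresis:     # too cold: the strip curls, a contact closes
            return True            # heat on
        if delta < -hysteresis:    # too warm: the contact opens
            return False           # heat off
        return heater_on           # within the dead band, leave the heater alone

No prediction appears anywhere: the delta causes the action directly.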
A postscript on the Hard Problem.
In the background there is the Hard Problem of Consciousness, which no one has a solution for, nor has anyone yet imagined what a solution could possibly look like. But all too often people respond to this enigma by arguing: only magic could cross the divide; magic does not exist; therefore consciousness does not exist. But the limits of what I understand are not the limits of the world.
I don’t think thermostat consciousness would require homunculi any more than human consciousness does, but I think it was a mistake on my part to use the word “consciousness”, as it inevitably complicates things rather than simplifying them (although FWIW I do agree that consciousness exists and is not an epiphenomenon).
The thermostat too is one with causation. The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate, and it’s absolutely not a crux for me). You are much more complex than a thermostat.
I think there is something a bit misleading about your example of a person regulating the temperature in their house manually. The fact that you can consciously implement the control algorithm does not tell us anything about your cognition, or even your decision-making process, since you can also implement pretty much any other algorithm (you are more or less Turing-complete, subject to finiteness, etc.). PCT is a theory of cognition, not simply of decision making.
I don’t think there is any possibility of a thermostat being conscious. The linked article makes the common error of arguing that wherever there is consciousness we see some phenomenon X, therefore wherever there is X there is consciousness, and if there doesn’t seem to be any, there must be consciousness “in a sense”.
Of course: the fact that I can consciously implement the control algorithm tells us nothing about my cognition. The thermostat controls temperature without being conscious; I can by my own conscious actions also choose to perform the thermostat’s role.
Anyway, all this began with my objecting to “agents” performing time travel, and arguing that whether an unconscious thermostat or a conscious entity such as myself controls the temperature, no time travel is involved. Neither do I achieve a goal merely by predicting that it will be achieved, but by acting to achieve it. Are we disagreeing about anything at this point?
I think that when seen from outside of the agent, your account is correct. But from the perspective of the agent, the world and the world model are indistinguishable, so the relationship between prediction and time is more complex.
From the perspective of this agent, i.e. me, the world and my model of it are very much distinguishable. Especially when the world surprises me by demonstrating that my model was inaccurate. Have you never discovered you were wrong about something? How can this happen if you cannot distinguish your model of the world from the world itself?
You’re absolutely right to focus on the moment the model fails. Updating your model to account for its failures is effectively what learning is. Again, if we look at you from the outside, we can give an account of the form: the model failed because it did not correspond to reality, so the agent updated it to one which corresponded better to reality (i.e. was more true).
But again, from the inside there is no access to reality, only the model. Perception and prediction are both mediated by the model itself, and when they contradict each other the model must be adjusted. But that the perceptions come from the “real” external world is itself just a feature of the model.
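In the crudest sketch (toy numbers and an arbitrary learning rate), “adjusting the model” looks something like this:

    # The model is a single parameter: how far one step of action moves us.
    model_speed = 1.0
    for sensed_move in [1.0, 1.1, 0.9, 0.0, 0.0]:  # perceptions arrive...
        predicted_move = model_speed               # ...and meet the prediction
        error = sensed_move - predicted_move       # contradiction between the two
        model_speed += 0.5 * error                 # adjust the model accordingly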
You have the extraordinary ability to change your own model in response to its contradictions. Let’s consider the case of agents that can’t do that.
If a Roomba is flipped on its back and its wheels keep spinning (I imagine real-life Roombas probably have some kind of sensor to deal with these situations, but let’s assume this one doesn’t), from the outside we can say that the Roomba’s model, which says that spinning your wheels makes you move, is no longer in correspondence with reality. But from the point of view of the Roomba, all that can be said is that the world has become incomprehensible.
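Unlike the learning sketch above, this agent has no update rule at all (again, a purely illustrative sketch):

    class FixedModelRoomba:
        """Hard-coded model: spinning the wheels always moves us one unit."""
        def __init__(self):
            self.predicted_position = 0.0

        def step(self, sensed_position):
            self.predicted_position += 1.0  # what the model says must have happened
            return abs(self.predicted_position - sensed_position)  # surprise

    # Flipped on its back, the sensed position never changes, and with no way
    # to revise the model the surprise grows without bound.
    roomba = FixedModelRoomba()
    print([roomba.step(sensed_position=0.0) for _ in range(5)])
    # [1.0, 2.0, 3.0, 4.0, 5.0]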