As an agent, when you think about the aspects of the future that you yourself may be able to influence, your predictions have to factor in what actions you will take. But your choices about what actions to take will in turn be influenced by these predictions.
To reach an equilibrium you are restricted to predicting a future such that predicting it does not cause you to predict yourself taking actions that would prevent it (otherwise you would have to change the prediction).
Do you have an example for that? It seems to me you’re describing a circular process, in which you’d naturally look for stable equilibria. Basically prediction will influence action, action will influence prediction, something like that. But I don’t quite get how the circle works.
Say I’m the agent faced with a decision. I have some options, I think through the possible consequences of each, and I choose the option that leads to the best outcome according to some metric. I feel it would be fair to say that the predictions I’m making about the future determine which choice I’ll make.
What I don’t see is how the choice I end up making influences my prediction about the future. From my perspective the first step is predicting all possible futures and the second step is executing the action that leads to the best future. Whatever option I end up selecting, it was already reasoned through beforehand, as were all the other options I ended up not selecting. Where’s the feedback loop?
I will try to give an example (but let me also say that I am just learning this stuff myself so take it with a pinch of salt).
Let’s say that I throw a ball at you.
You have to answer two questions:
1. What is going to happen?
2. What am I going to do?
Observe that the answers to these two questions affect each other...
Your answer to question 1 affects your answer to question 2 because you have preferences that you act on: for example, if your prediction is that the ball is going to hit your face, then you will pick an action that prevents that, such as raising your hand to catch it. (Adding this at the end: if I had picked a better example, the outcome could also have a much more direct effect on you independent of preference, e.g. if the ball is heavy and moving fast it’s clear that you won’t be able to just stand there when it hits you.)
Your answer to question 2 affects your answer to question 1, because your actions have causal power. For example, if you raise your hand, the ball won’t hit your face.
So ultimately your tuple of answers to questions 1 and 2 should be compatible with each other, in the sense that your answer to 1 does not make you do something other than 2, and your answer to 2 does not cause an outcome other than 1.
For example you can predict that you will catch the ball and act to catch it.
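(If it helps, here is that compatibility condition written out as a toy check. The two little lookup tables are invented for illustration and just stand in for whatever the brain actually does.)

```python
# Toy check of the "compatible tuple" condition from the ball example.
# The lookup tables are invented stand-ins, not a model of real cognition.

ACTION_GIVEN_PREDICTION = {        # answer to question 1 -> what you would then do
    "ball hits my face": "raise hand",
    "I catch the ball": "raise hand",
}
OUTCOME_GIVEN_ACTION = {           # answer to question 2 -> what would then happen
    "raise hand": "I catch the ball",
    "stand still": "ball hits my face",
}

def compatible(outcome, action):
    # (1) predicting `outcome` must not make you do something other than `action`;
    # (2) doing `action` must not cause something other than `outcome`.
    return (ACTION_GIVEN_PREDICTION.get(outcome) == action
            and OUTCOME_GIVEN_ACTION.get(action) == outcome)

outcomes = ["ball hits my face", "I catch the ball"]
actions = ["raise hand", "stand still"]
print([(o, a) for o in outcomes for a in actions if compatible(o, a)])
# -> [('I catch the ball', 'raise hand')]
```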
Of course you are right that we could model this situation differently: you have some possible actions with predicted consequences and expected utilities, and you pick the one that maximises expected utility, etc.
So why might we prefer the first model? I don’t have a fully formed answer for you here (ultimately it will come down to which one is more useful/predictive/etc.) but a few tentative reasons might be:
it better corresponds to Embedded Agency
it fits with an active inference model of cognition
it better fits with our experience of making decisions (especially fast ones such as the example)
I don’t understand this example. If someone throws a ball to me, I can try to catch it, try to dodge it, let it go whizzing over my head, and so on, and I will have some idea of how things will develop depending on my choices, but you seem to be adding an unnecessary extra level onto this that I don’t follow.
I am claiming (weakly) that the actual process looks less like:
enumerate possible actions
predict their respective outcomes
choose the best
and more like (roughly sketched below):
come up with possible actions and outcomes in an ad hoc way
back and forward chain from them until a pair meet in the middle
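Very roughly, and with invented stand-in functions, the contrast I have in mind looks something like this:

```python
# Two caricatures of the decision process; all the callables are stand-ins
# invented for this sketch.

def classical(actions, predict, score):
    """Enumerate the actions, predict each outcome, choose the best."""
    return max(actions, key=lambda a: score(predict(a)))

def meet_in_the_middle(propose_actions, propose_outcomes, leads_to):
    """Generate candidate actions and candidate outcomes ad hoc, and stop as
    soon as some action/outcome pair is found to fit together."""
    actions_seen, outcomes_seen = [], []
    for action, outcome in zip(propose_actions(), propose_outcomes()):
        actions_seen.append(action)
        outcomes_seen.append(outcome)
        for a in actions_seen:
            for o in outcomes_seen:
                if leads_to(a, o):   # forward chain from a meets back chain from o
                    return a, o
    return None  # ran out of ideas before anything met in the middle
```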
I was going to write a whole argument about how the kind of decision-theoretical procedure you are describing is something you can choose to do at the conscious level, but not something you actually cognitively do by default, but then I saw you basically already wrote the same argument here.
Consider the ball scenario from the perspective of perceptual control theory (or active inference):
When you first see the ball your baseline is probably just something like not getting hit. But on its own this does not really give any signal for how to act, so you need to refine your baseline to something more specific. What baseline will you pick? Out of the space of possible futures in which you don’t get hit by the ball there are many choices available to you:
You could pick one in which the wind blows the ball to the side, but you can’t control that so it won’t help much.
You could pick a future that you don’t actually have the motor skills to bring about, such as leaping into the air and kung-fu kicking the ball away. You start jumping but then you realise you don’t know kung-fu, and the ball hits you in the balls!
You could pick a future in which you catch the ball, and do so (or you could still fail).
All of this does not happen discretely but over time. The ball is approaching. You are starting to move based on your current baseline, or some average of the options you are still considering. As this goes on, the space of possible futures is being changed by your actions and by the ball’s approach. Maybe it’s too late to raise your hand? Maybe it’s too late to duck? Maybe there’s still time to flinch?
All of this is to say that to successfully do something deliberately, your goal must have the property that when used as a reference your perceptions will actually converge there (stability).
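To show the shape of what I mean, here is a bare-bones control loop in that style; the gain, the tolerance, and the trivial “world” update are all invented for illustration:

```python
# Bare-bones perceptual control loop: act on the error between a reference
# (the baseline you picked, e.g. "hand at the ball's height") and the current
# perception. All numbers and the toy world update are made up.

def control_loop(reference, perception, gain=0.5, steps=20, tol=1e-2):
    for _ in range(steps):
        error = reference - perception   # the difference is what drives the action
        if abs(error) < tol:
            break
        action = gain * error            # proportional response to the error
        perception += action             # here the world responds trivially to the action
    return perception

# With a sensible gain the perception converges on the reference (a stable goal);
# with gain > 2.0 it oscillates and diverges (a goal you cannot actually hold).
print(control_loop(reference=1.5, perception=0.0, gain=0.5))
print(control_loop(reference=1.5, perception=0.0, gain=2.5))
```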
Let’s go back to your example of the thermostat:
From your perspective as an outsider there is a clear forward causal series of events. But how should the thermostat itself (to which I am magically granting the gift of consciousness) think about the future?
From the point of view of the thermostat, the set temperature is its destiny to which it is inexorably being pulled. In other words it is the only goal it can possibly hope to pursue.
Of course as outsiders we know we can open the window and deny the thermostat this future. But the thermostat itself knows nothing of windows, they are outside of its world model and outside of its control.
All of this is to say that to successfully do something deliberately, your goal must have the property that when used as a reference your perceptions will actually converge there (stability).
For all that people talk of agents and agentiness, their conceptions are often curiously devoid of agency, with “agents” merely predicting outcomes and (to them) magically finding themselves converging there, unaware that they are taking any actions to steer the future where they want it to go. But what brings the perception towards the goal is not the goal, but the way that the actions depend on the difference.
From the point of view of the thermostat, the set temperature is its destiny to which it is inexorably being pulled. In other words it is the only goal it can possibly hope to pursue.
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Some of the heaters in my house do not have thermostats, in which case I’m the thermostat. I turn the heater on when I find the room too cold and turn it off when it’s too warm. This is exactly what a thermostat would be doing, except that it can’t think about it.
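The rule being followed, whether by me or by the device, is just a bang-bang controller. A minimal sketch, with the setpoint, the dead band, and the toy room model all invented for illustration:

```python
# Bang-bang control: turn the heater on when too cold, off when too warm.
# Setpoint, band, and the toy room model are invented for illustration.

def thermostat_step(temp, setpoint=20.0, band=0.5, heater_on=False):
    """Decide the heater state from the current temperature alone."""
    if temp < setpoint - band:
        return True      # too cold: turn the heater on
    if temp > setpoint + band:
        return False     # too warm: turn it off
    return heater_on     # within the band: leave it as it is

# Toy simulation: the room warms while the heater is on and cools otherwise.
temp, heater_on = 15.0, False
for _ in range(30):
    heater_on = thermostat_step(temp, heater_on=heater_on)
    temp += 0.4 if heater_on else -0.2
print(round(temp, 1))    # hovers around the 20 degree setpoint
```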
For all that people talk of agents and agentiness, their conceptions are often curiously devoid of agency, with “agents” merely predicting outcomes and (to them) magically finding themselves converging there, unaware that they are taking any actions to steer the future where they want it to go. But what brings the perception towards the goal is not the goal, but the way that the actions depend on the difference.
So does the delta between goal and perception cause the action directly? Or does it require “you” to become aware of that delta and then choose the corresponding action?
If I understand correctly you are arguing for the latter, in which case this seems like the homunculus fallacy. How does “you” decide what actions to pick?
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Only if we want to commit ourselves to a homunculus theory of consciousness and a libertarian theory of free will.
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Only if we want to commit ourselves to a homunculus theory of consciousness and a libertarian theory of free will.
You introduced the homunculus by imagining the thermostat conscious. I responded by pointing out that if it’s going to be aware of its perception and reference, there is no reason to exclude the rest of the show.
But of course the thermostat is not conscious.
I am. When I act as the thermostat, I am perceiving the temperature of the room, and the temperature I want it to be, and I decide and act to turn the heat on or off accordingly. There is no homunculus here, nor “libertarian free will”, whatever that is, just a description of my conscious experience and actions. To dismiss this as a homunculus theory is to dismiss the very idea of consciousness.
And some people do that. Do you? They assert that there is no such thing as consciousness, or a mind, or subjective experience. These are not even illusions, for that would imply an experiencer of the illusion, and there is no experience. For such people, all talk of these things is simply a mistake. If you are one of these people, then I don’t think the conversation can proceed any further. From my point of view you would be a blind man denying the existence and even the idea of sight.
Or perhaps you grant consciousness the ability to observe, but not to do? In imagination you grant the thermostat the ability to perceive, but not to do, supposing that the latter would require the nonexistent magic called “libertarian free will”. But epiphenomenal consciousness is as incoherent a notion as p-zombies. How can a thing exist that has no effect on any physical object, yet we talk about it (which is a physical action)?
I’m just guessing at your views here.
So does the delta between goal and perception cause the action directly?
For the thermostat (assuming the bimetallic strip type), the reference is the position of a pair of contacts either side of the strip, the temperature causes the curvature of the strip, which makes or breaks the contacts, which turns the heating on or off. This is all physically well understood. There is nothing problematic here.
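A toy model of that causal chain (all the constants are invented; the only point is that the contact positions are where the reference lives):

```python
# Toy model of the chain: temperature -> curvature of the strip -> contact
# made or broken -> heating switched. Constants are invented; the structure
# is the point, and the contact gap plays the role of the reference.

CONTACT_GAP = 1.0   # how far the strip must bend to touch either contact

def curvature(temp, calibration=20.0, k=0.5):
    """The strip bends in proportion to how far the temperature is from its
    calibration point."""
    return k * (temp - calibration)

def switch_action(temp):
    bend = curvature(temp)
    if bend < -CONTACT_GAP:
        return "make heating circuit"   # cold: strip touches one contact
    if bend > CONTACT_GAP:
        return "break heating circuit"  # warm: strip touches the other
    return "no change"                  # in between: neither contact touched

print(switch_action(17.0), switch_action(20.0), switch_action(23.0))
# prints: make heating circuit no change break heating circuit
```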
For me acting as the thermostat, I perceive the delta, and act accordingly. I don’t see anything problematic here either. The sage is not above causation, nor subject to causation, but one with causation. As are we all, whether we are sages or not.
A postscript on the Hard Problem.
In the background there is the Hard Problem of Consciousness, which no-one has a solution for, nor has even yet imagined what a solution could possibly look like. But all too often people respond to this enigma by arguing, only magic could cross the divide, magic does not exist, therefore consciousness does not exist. But the limits of what I understand are not the limits of the world.
I don’t think thermostat consciousness would require homunculi any more than human consciousness does, but I think it was a mistake on my part to use the word consciousness, as it inevitably complicates things rather than simplifying them (although FWIW I do agree that consciousness exists and is not an epiphenomenon).
For the thermostat (assuming the bimetallic strip type), the reference is the position of a pair of contacts either side of the strip, the temperature causes the curvature of the strip, which makes or breaks the contacts, which turns the heating on or off. This is all physically well understood. There is nothing problematic here.
For me acting as the thermostat, I perceive the delta, and act accordingly. I don’t see anything problematic here either. The sage is not above causation, nor subject to causation, but one with causation. As are we all, whether we are sages or not.
The thermostat too is one with causation. The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate and it’s absolutely not a crux for me). You are much more complex than a thermostat.
I think there is something a bit misleading about your example of a person regulating temperature in their house manually. The fact that you can consciously implement the control algorithm does not tell us anything about your cognition or even your decision-making process, since you can also implement pretty much any other algorithm (you are more or less Turing complete, subject to finiteness etc.). PCT is a theory of cognition, not simply of decision making.
The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate and it’s absolutely not a crux for me). You are much more complex than a thermostat.
I don’t think there is any possibility of a thermostat being conscious. The linked article makes the common error of arguing that wherever there is consciousness we see some phenomenon X, therefore wherever there is X there is consciousness, and if there doesn’t seem to be any, there must be consciousness “in a sense”.
The fact that you can consciously implement the control algorithm does not tell us anything about your cognition
Of course. The thermostat controls temperature without being conscious; I can by my own conscious actions also choose to perform the thermostat’s role.
Anyway, all this began with my objecting to “agents” performing time travel, and arguing that whether an unconscious thermostat or a conscious entity such as myself controls the temperature, no time travel is involved. Neither do I achieve a goal merely by predicting that it will be achieved, but by acting to achieve it. Are we disagreeing about anything at this point?