Time travel, in the classic sense, has no place in rational theory[3] but, through predictions, information can have retrocausal effects.
[...] agency is time travel. An agent is a mechanism through which the future is able to affect the past. An agent models the future consequences of its actions, and chooses actions on the basis of those consequences. In that sense, the consequence causes the action, in spite of the fact that the action comes earlier in the standard physical sense.
― Scott Garrabrant, Saving Time (MIRI Agent Foundations research[4])
Feedback loops are not retrocausal. When you turn on a thermostat, then some time thereafter, the room will be at about the temperature it says on the dial. That future temperature is not causing itself; what is causing it is the thermostat’s present sensing of the difference between the reference temperature and the actual temperature, and the consequent turning on of the heat source. If there’s a window wide open and it’s very cold outside, the heat source may not be powerful enough to bring the room up to the reference temperature, and that temperature will not be reached. Can this supposed Tardis be defeated just by opening a window?
The consequence does not cause the action. It does not even behave as if it causes the action. Here the consequence varies according to the state of the window, while the action (keep the heating on while the temperature is below the reference) is the same, regardless of whether it is going to succeed.
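To make the loop concrete, here is a minimal toy simulation (Python, with invented numbers; nothing here is meant as more than a sketch). The decision rule is identical in both runs; only the disturbance, the open window, changes the outcome.

```python
# Toy bang-bang thermostat. All numbers are invented for illustration.

def simulate(window_open, reference=21.0, outside=0.0, steps=200):
    """Run a simple thermostat loop and return the final room temperature."""
    temp = 10.0            # starting room temperature (deg C)
    heater_power = 0.5     # degrees added per step while the heater is on
    for _ in range(steps):
        heater_on = temp < reference          # the rule never changes
        if heater_on:
            temp += heater_power
        # heat loss to the outside; an open window leaks much faster
        leak_rate = 0.2 if window_open else 0.02
        temp -= leak_rate * (temp - outside)
    return temp

print("window closed:", round(simulate(window_open=False), 1))  # settles near the reference
print("window open:  ", round(simulate(window_open=True), 1))   # heater on throughout, still falls far short
```

In both runs the controller only ever compares the present temperature with the present reference; nothing about the eventual outcome enters into it.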
For agents that think about and make predictions about the future (as the thermostat does not), what causes the agent’s actions is its present ideas about those consequences. Those present ideas are not obtained from the future, but from the agent’s present knowledge. Nothing comes from the future. If there is the equivalent of an open window, frustrating the agent’s plans, and the agent does not know of it, then they will execute their plans and the predicted consequence will not happen. The poet Robert Burns wrote a well-known poem on this point.
To the extent that they accurately model the future (based on data from their past and compute from their present[5]),
Yes.
agents allow information from possible futures to flow through them into the present.
No. The thermostat has no knowledge of its eventual success or failure. An agent may do its best to predict the outcome of its plans, but is also not in communication with the future. How much easier everything would be, if we could literally see the future consequences of our present actions, instead of guessing as best we can from present knowledge! But things do not work that way.
The Nick Land article you linked describes him as telling “theory fiction”, a term I take as parallel to “science fiction”. That is, invent stuff and ask, what if? (The “what if” part is what distinguishes it from mundane fiction.) But if the departure point from reality is too great a rupture, all you can get from it is an entertaining story, not something of any relevance to the real world. “1984” was a warning; the Lensman novels were inspirational entertainment; the Cthulhu mythos is pure entertainment.
There’s nothing wrong with entertainment. But it is fictional evidence only, even when presented in the outward form of a philosophical treatise instead of a novel.
ETA: I see that two people already made the same point commenting on the linked Garrabrant article, but they did not receive a response. In the same place, I think this also touches on the same problem.
This comment looks to me like you’re missing the main insight of finite factored sets. Suggest reading https://www.lesswrong.com/posts/PfcQguFpT8CDHcozj/finite-factored-sets-in-pictures-6 and some of the other posts, maybe https://www.lesswrong.com/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets and https://www.lesswrong.com/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a until it makes sense why a bunch of clearly competent people thought this was an important contribution.
One of the comments you linked has an edit showing they updated towards this position.
This is a non-trivial insight and reframe, and I’m not going to try and write a better explanation than Scott and Magdalena. But if you take the time to get it and respond with clear understanding of the frame, I’m open to taking a shot at answering stuff.
I don’t believe you have something to gain by insisting on using the word “time” in a technical jargon sense—or do you mean something different than “if self-fulfilling prophecies can be seen as choosing one of imagined scenarios, and you imagine there are agents in those scenarios, you can also imagine as if those future agents will-have-influenced your decision today, as if they acted retro-causally”? Is there a need for an actual non-physical philosophy that is not just a metaphor?
There’s a non-trivial conceptual clarification / deconfusion gained by FFS on top of the summary you made there. I put decent odds on this clarification being necessary for some approaches to strongly scalable technical alignment.
(a strong opinion held weakly, not a rigorous attempt to refute anything, just to illustrate my stance)
TypeError: obviously, any correct data structure for this shape of the problem must be approximating an infinite set (Bayesian), thus must be implemented lazy/generative, thus must be learnable, thus must be redundant and cannot possibly be factored ¯\_(ツ)_/¯
also, strong alignment is impossible; under the observation that we live in the least dignified world, doom will be forward-caused by someone who thinks alignment is possible and makes a mistake:
As an agent, when you think about the aspects of the future that you yourself may be able to influence, your predictions have to factor in what actions you will take. But your choices about what actions to take will in turn be influenced by these predictions. To achieve an equilibrium you are restricted to predicting a future such that your predicting it will not cause you to also predict yourself taking actions that will prevent it (otherwise you would have to change predictions).
In other words, you must predict a future such that your predicting it also causes you to predict that you will make it happen. Such a future is an ‘attractor’.
Of course you might say that from an external/objective point of view it is the conception of possible futures that is acting with causal force, not the actual possible futures. But you are an embedded agent, you can only observe from the inside, and your conception of the future is the only sense in which you can conceive of it.
as a sidenote:
The Nick Land article you linked describes him as telling “theory fiction”, a term I take as parallel to “science fiction”.
Theory fiction is a pretty fuzzy term, but this is definitely an unfair characterisation. It’s something more like a piece of fiction that makes a contribution to (philosophical/political) theory.
As an agent when you think about the aspects of the future that you yourself may be able to influence, your predictions have to factor in what actions you will take. But your choices about what actions to take will in turn be influenced by these predictions.
To achieve an equilibrium you are restricted to predicting a future such that your predicting it will not cause you to also predict yourself taking actions that will prevent it (otherwise you would have to change predictions).
Do you have an example for that? It seems to me you’re describing a circular process, in which you’d naturally look for stable equilibria. Basically prediction will influence action, action will influence prediction, something like that. But I don’t quite get how the circle works.
Say I’m the agent faced with a decision. I have some options, I think through the possible consequences of each, and I choose the option that leads to the best outcome according to some metric. I feel it would be fair to say that the predictions I’m making about the future determine which choice I’ll make.
What I don’t see is how the choice I end up making influences my prediction about the future. From my perspective the first step is predicting all possible futures and the second step is executing the action that leads to the best future. Whatever option I end up selecting, it was already reasoned through beforehand, as were all the other options I ended up not selecting. Where’s the feedback loop?
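To be explicit about the picture I have in mind, here is a minimal sketch (Python; the options and their scores are invented purely for illustration). Prediction feeds into the choice exactly once, and nothing flows back:

```python
# One-pass decision making as I am describing it. Options and their
# predicted 'goodness' scores are invented for illustration.

predicted_value = {
    "catch the ball": 0.9,
    "duck":           0.6,
    "stand still":    0.1,
}

# Step 1: the predictions are already on the table.
# Step 2: pick the action whose predicted future scores best.
choice = max(predicted_value, key=predicted_value.get)
print("chosen action:", choice)   # the choice falls out of the predictions;
                                  # in this picture nothing feeds back into them
```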
I will try to give an example (but let me also say that I am just learning this stuff myself, so take it with a pinch of salt).
Let’s say that I throw a ball at you.
You have to answer two questions:
1. What is going to happen?
2. What am I going to do?
Observe that the answers to these two questions affect each other...
Your answer to question 1 affects your answer to question 2 because you have preferences that you act on. For example, if your prediction is that the ball is going to hit your face, then you will pick an action that prevents that, such as raising your hand to catch it. (Adding this at the end: if I had picked a better example, the outcome could also have a much more direct effect on you independent of preference, e.g. if the ball is heavy and moving fast it’s clear that you won’t be able to just stand there when it hits you.)
Your answer to question 2 affects your answer to question 1, because your actions have causal power. For example, if you raise your hand, the ball won’t hit your face.
So ultimately your tuple of answers to questions 1 and 2 should be compatible with each other, in the sense that your answer to 1 does not make you do something other than 2, and your answer to 2 does not cause an outcome other than 1.
For example you can predict that you will catch the ball and act to catch it.
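To make the compatibility requirement concrete, here is a toy check (Python; the little ‘policy’ and ‘world model’ are invented for illustration, not a claim about how this should be formalised):

```python
# Check which (predicted outcome, action) pairs are compatible in the sense above.

def action_if_i_expect(outcome):
    """My answer to question 2, given my answer to question 1."""
    if outcome == "ball hits my face":
        return "raise hand"      # I prefer not to be hit, so I act against this prediction
    if outcome == "I catch the ball":
        return "raise hand"      # catching requires the same movement
    return "stand still"         # e.g. 'the wind carries it away': nothing for me to do

def outcome_if_i_do(action):
    """My answer to question 1, given my answer to question 2."""
    return "I catch the ball" if action == "raise hand" else "ball hits my face"

for prediction in ["ball hits my face", "I catch the ball", "the wind carries it away"]:
    action = action_if_i_expect(prediction)
    result = outcome_if_i_do(action)
    print(f"predict {prediction!r}: do {action!r}, get {result!r}, "
          f"compatible: {result == prediction}")
```

Only the prediction that I catch the ball survives the loop, which is the sense in which it is an ‘attractor’.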
Of course you are right that we could model this situation differently: you have some possible actions with predicted consequences and expected utilities, and you pick the one that maximises expected utility, etc.
So why might we prefer the prior model? I don’t have a fully formed answer for you here (ultimately it will come down to which one is more useful/predictive/etc.) but a few tentative reasons might be:
It better corresponds to Embedded Agency.
It fits with an active inference model of cognition.
It better fits with our experience of making decisions (especially fast ones such as the example).
I don’t understand this example. If someone throws a ball to me, I can try to catch it, try to dodge it, let it go whizzing over my head, and so on, and I will have some idea of how things will develop depending on my choices, but you seem to be adding an unnecessary extra level onto this that I don’t follow.
I am claiming (weakly) that the actual process looks less like:
enumerate possible actions
predict their respective outcomes
choose the best
and more like
come up with possible actions and outcomes in an ad hoc way
back and forward chain from them until a pair meets in the middle (a toy sketch of this is below)
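For what it’s worth, here is a toy sketch of the ‘meet in the middle’ idea (Python; the little state graph and action names are invented, and a real cognitive process would of course look nothing like this tidy):

```python
# Toy 'back and forward chaining until a pair meet in the middle'.
# The states, actions and transitions below are invented for illustration.

from collections import deque

# forward model: from each state, which actions lead where
forward = {
    "ball incoming": {"raise hand": "hand in path", "duck": "crouched"},
    "hand in path":  {"close hand": "ball caught"},
    "crouched":      {"wait": "ball sails past"},
}

# backward model: for each desired state, which states could have led to it
backward = {
    "ball caught":     ["hand in path"],
    "hand in path":    ["ball incoming"],
    "ball sails past": ["crouched"],
    "crouched":        ["ball incoming"],
}

def meet_in_the_middle(start, goal):
    """Grow a frontier forward from the current situation and another
    backward from the desired outcome until they share a state."""
    fwd, bwd = {start}, {goal}
    fwd_queue, bwd_queue = deque([start]), deque([goal])
    while fwd_queue or bwd_queue:
        if fwd_queue:
            s = fwd_queue.popleft()
            for nxt in forward.get(s, {}).values():
                if nxt not in fwd:
                    fwd.add(nxt)
                    fwd_queue.append(nxt)
        if bwd_queue:
            g = bwd_queue.popleft()
            for prev in backward.get(g, []):
                if prev not in bwd:
                    bwd.add(prev)
                    bwd_queue.append(prev)
        meeting = fwd & bwd
        if meeting:
            return meeting
    return set()

print(meet_in_the_middle("ball incoming", "ball caught"))  # {'hand in path'}
```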
I was going to write a whole argument about how the kind of decision-theoretic procedure you are describing is something you can choose to do at the conscious level, but not something you actually cognitively do by default, but then I saw you basically already wrote the same argument here.
Consider the ball scenario from the perspective of perceptual control theory (or active inference):
When you first see the ball your baseline is probably just something like not getting hit. But on its own this does not really give any signal for how to act, so you need to refine your baseline to something more specific. What baseline will you pick? Out of the space of possible futures in which you don’t get hit by the ball there are many choices available to you:
You could pick one in which the wind blows the ball to the side, but you can’t control that, so it won’t help much.
You could pick a future that you don’t actually have the motor skills to bring about, such as leaping into the air and kung-fu kicking the ball away. You start jumping but then you realise you don’t know kung-fu, and the ball hits you in the balls!
You could pick a future in which you catch the ball, and do so (or you could still fail).
All of this does not happen discretely but over time. The ball is approaching. You are starting to move based on your current baseline, or some average of the ones you are still considering. As this goes on, the space of possible futures is being changed by your actions and by the ball’s approach. Maybe it’s too late to raise your hand? Maybe it’s too late to duck? Maybe there’s still time to flinch?
All of this is to say that to successfully do something deliberately, your goal must have the property that when used as a reference your perceptions will actually converge there (stability).
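A toy way to see the stability requirement (Python; the ‘repertoire’, the control gain and the convergence test are all invented for illustration):

```python
# Which candidate references can my perceptions actually converge to?

def converges(reference, repertoire, gain=0.3, steps=50, tolerance=0.05):
    """Crude control loop: perception 0.0 means 'nowhere near the chosen
    future', 1.0 means 'the chosen future is realised'. Each step I act to
    close the gap, but only if the reference is something I can act towards."""
    perception, target = 0.0, 1.0
    for _ in range(steps):
        error = target - perception
        if reference in repertoire:   # I have actions that reduce the error
            perception += gain * error
        # otherwise nothing I do moves the perception toward this reference
    return abs(target - perception) < tolerance

my_repertoire = {"catch the ball", "duck"}
for goal in ["wind blows the ball aside", "kung-fu kick the ball away", "catch the ball"]:
    print(f"{goal}: stable reference = {converges(goal, my_repertoire)}")
```

Only the last candidate works as a reference for me; the other two are perfectly fine futures that I simply cannot steer towards.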
Let’s go back to your example of the thermostat:
From your perspective as an outsider there is a clear forward causal series of events. But how should the thermostat itself (to which I am magically granting the gift of consciousness) think about the future?
From the point of view of the thermostat, the set temperature is its destiny to which it is inexorably being pulled. In other words it is the only goal it can possibly hope to pursue.
Of course as outsiders we know we can open the window and deny the thermostat this future. But the thermostat itself knows nothing of windows, they are outside of its world model and outside of its control.
All of this is to say that to successfully do something deliberately, your goal must have the property that when used as a reference your perceptions will actually converge there (stability).
For all that people talk of agents and agentiness, their conceptions are often curiously devoid of agency, with “agents” merely predicting outcomes and (to them) magically finding themselves converging there, unaware that they are taking any actions to steer the future where they want it to go. But what brings the perception towards the goal is not the goal, but the way that the actions depend on the difference.
From the point of view of the thermostat, the set temperature is its destiny to which it is inexorably being pulled. In other words it is the only goal it can possibly hope to pursue.
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Some of the heaters in my house do not have thermostats, in which case I’m the thermostat. I turn the heater on when I find the room too cold and turn it off when it’s too warm. This is exactly what a thermostat would be doing, except that it can’t think about it.
For all that people talk of agents and agentiness, their conceptions are often curiously devoid of agency, with “agents” merely predicting outcomes and (to them) magically finding themselves converging there, unaware that they are taking any actions to steer the future where they want it to go. But what brings the perception towards the goal is not the goal, but the way that the actions depend on the difference.
So does the delta between goal and perception cause the action directly? Or does it require “you” to become aware of that delta and then choose the corresponding action?
If I understand correctly you are arguing for the latter, in which case this seems like the homunculus fallacy. How does “you” decide what actions to pick?
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Only if we want to commit ourselves to a homunculus theory of consciousness and a libertarian theory of free will.
If we are to imagine the thermostat conscious, then we surely cannot limit that consciousness to only the perception and the reference, but must also allow it to see, intend, and perform its own actions. It is not inexorably being pulled, but itself pushing (by turning the heat on and off) towards its goal.
Only if we want to commit ourselves to a homunculus theory of consciousness and a libertarian theory of free will.
(Also a reply to your parallel comment.) You introduced the homunculus by imagining the thermostat conscious. I responded by pointing out that if it’s going to be aware of its perception and reference, there is no reason to exclude the rest of the show.
But of course the thermostat is not conscious.
I am. When I act as the thermostat, I am perceiving the temperature of the room, and the temperature I want it to be, and I decide and act to turn the heat on or off accordingly. There is no homunculus here, nor “libertarian free will”, whatever that is, just a description of my conscious experience and actions. To dismiss this as a homunculus theory is to dismiss the very idea of consciousness.
And some people do that. Do you? They assert that there is no such thing as consciousness, or a mind, or subjective experience. These are not even illusions, for that would imply an experiencer of the illusion, and there is no experience. For such people, all talk of these things is simply a mistake. If you are one of these people, then I don’t think the conversation can proceed any further. From my point of view you would be a blind man denying the existence and even the idea of sight.
Or perhaps you grant consciousness the ability to observe, but not to do? In imagination you grant the thermostat the ability to perceive, but not to do, supposing that the latter would require the nonexistent magic called “libertarian free will”. But epiphenomenal consciousness is as incoherent a notion as p-zombies. How can a thing exist that has no effect on any physical object, yet we talk about it (which is a physical action)?
I’m just guessing at your views here.
So does the delta between goal and perception cause the action directly?
For the thermostat (assuming the bimetallic strip type), the reference is the position of a pair of contacts either side of the strip, the temperature causes the curvature of the strip, which makes or breaks the contacts, which turns the heating on or off. This is all physically well understood. There is nothing problematic here.
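The whole chain fits in a few lines of code (a sketch with arbitrary numbers, not the physics of any real strip); every arrow points forward in time:

```python
# Temperature -> curvature of the strip -> contacts made or broken -> heater on or off.
# Nothing here refers to the eventual outcome.

def strip_curvature(temperature_c, set_point_c=20.0):
    # hotter strip bends further away from the contacts (arbitrary linear model)
    return 0.1 * (temperature_c - set_point_c)

def contacts_closed(curvature):
    # the contacts touch while the strip has not bent past them
    return curvature < 0.0

def heater_on(temperature_c):
    return contacts_closed(strip_curvature(temperature_c))

for t in (15.0, 19.5, 20.5, 25.0):
    print(f"{t:4.1f} C: heater on = {heater_on(t)}")
```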
For me acting as the thermostat, I perceive the delta, and act accordingly. I don’t see anything problematic here either. The sage is not above causation, nor subject to causation, but one with causation. As are we all, whether we are sages or not.
A postscript on the Hard Problem.
In the background there is the Hard Problem of Consciousness, which no-one has a solution for, nor has even yet imagined what a solution could possibly look like. But all too often people respond to this enigma by arguing, only magic could cross the divide, magic does not exist, therefore consciousness does not exist. But the limits of what I understand are not the limits of the world.
I don’t think thermostat consciousness would require homunculi any more than human consciousness does, but I think it was a mistake on my part to use the word consciousness, as it inevitably complicates things rather than simplifying them (although FWIW I do agree that consciousness exists and is not an epiphenomenon).
For the thermostat (assuming the bimetallic strip type), the reference is the position of a pair of contacts either side of the strip, the temperature causes the curvature of the strip, which makes or breaks the contacts, which turns the heating on or off. This is all physically well understood. There is nothing problematic here.
For me acting as the thermostat, I perceive the delta, and act accordingly. I don’t see anything problematic here either. The sage is not above causation, nor subject to causation, but one with causation. As are we all, whether we are sages or not.
The thermostat too is one with causation. The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate and it’s absolutely not a crux for me). You are much more complex than a thermostat.
I think there is something a bit misleading about your example of a person regulating temperature in their house manually. The fact that you can consciously implement the control algorithm does not tell us anything about your cognition or even your decision making process, since you can also implement pretty much any other algorithm (you are more or less Turing complete, subject to finiteness etc.). PCT is a theory of cognition, not simply of decision making.
The thermostat acts in exactly the same way as you do. It is possibly even already conscious (I had completely forgotten this was an established debate and it’s absolutely not a crux for me). You are much more complex than a thermostat.
I don’t think there is any possibility of a thermostat being conscious. The linked article makes the common error of arguing that wherever there is consciousness we see some phenomenon X, therefore wherever there is X there is consciousness, and if there doesn’t seem to be any, there must be consciousness “in a sense”.
The fact that you can consciously implement the control algorithm does not tell us anything about your cognition
Of course. The thermostat controls temperature without being conscious; I can by my own conscious actions also choose to perform the thermostat’s role.
Anyway, all this began with my objecting to “agents” performing time travel, and arguing that whether an unconscious thermostat or a conscious entity such as myself controls the temperature, no time travel is involved. Neither do I achieve a goal merely by predicting that it will be achieved, but by acting to achieve it. Are we disagreeing about anything at this point?