This is the intuition behind claims like Viktor Frankl’s: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.” If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.” [...]
Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this: you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B. Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you will still choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.”
I would frame the Frankl/Buddhist thing somewhat differently. While I agree with your characterization, I think that the F/B thing is also compatible with freedom as optimization.
Looking at your descriptions through the frame of subagents, the automatic response / constraint thing is describing something like the thing that I discussed in my post on coherence in humans: a situation where a single subagent or a coalition of them seizes control in order to force a particular reaction, without the rest of the mind-system having a chance to properly evaluate the need to do so. The extreme case is a series of forced moves by protector subagents which see those as the only possible actions, in a way which is poorly adapted to the person’s current environment and only keeps getting them into a worse and worse mess.
Many Buddhist techniques, as well as contemporary therapy methods and various CFAR techniques, are IMO aimed at disassembling these kinds of forced moves, by enabling more subagents in the mind-system to be brought in to evaluate the action before taking it, rather than going by the judgment of just one subagent.
But… subjectively and theoretically, this feels like it’s moving things towards optimization and indifference. When all subagents get to participate in the decision and evaluate the relevant information, they will agree on the optimal decision in light of their current state of knowledge, and then there is nothing to do except to execute that optimal decision. They are unconstrained and indifferent in the sense of being able to consider any action during their decision-making process, but optimizing in the sense of eventually converging on a single clearly optimal decision.
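The convergence claim above can be illustrated with a toy model (all subagent names and numbers here are invented for illustration, not taken from any real theory of mind): each subagent holds private value estimates over the available actions, and pooling those estimates before deciding can flip the choice away from the one a single dominant subagent would have forced.

```python
# Toy sketch: subagents each hold private value estimates for two actions.
# A "forced move" is when one subagent's estimates alone dictate the choice;
# "integration" is when all subagents' evidence is pooled before choosing.

def pooled_choice(evidence_per_subagent):
    """Average each subagent's value estimates, then pick the best action."""
    actions = evidence_per_subagent[0].keys()
    pooled = {
        a: sum(e[a] for e in evidence_per_subagent) / len(evidence_per_subagent)
        for a in actions
    }
    return max(pooled, key=pooled.get), pooled

# Three subagents' value estimates for "worry" vs "let_go" (made-up numbers):
evidence = [
    {"worry": 0.9, "let_go": 0.1},  # fearful protector subagent
    {"worry": 0.2, "let_go": 0.8},  # subagent that sees the fear as unjustified
    {"worry": 0.1, "let_go": 0.9},  # subagent tracking the actual evidence
]

forced = max(evidence[0], key=evidence[0].get)  # only the protector decides
integrated, pooled = pooled_choice(evidence)    # all subagents weigh in

print(forced)      # worry
print(integrated)  # let_go
```

In this sketch the subagents are "indifferent" in the sense that every action stays on the table during pooling, yet the pooled estimates still single out one optimal action, matching the convergence described above.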
To make this more concrete, here’s an example from the book Rethinking Positive Thinking, discussing this kind of integration process:
Try this exercise for yourself. Think about a fear you have about the future that is vexing you quite a bit and that you know is unjustified. Summarize your fear in three to four words. For instance, suppose you’re a father who has gotten divorced and you share custody with your ex-wife, who has gotten remarried. For the sake of your daughter’s happiness, you want to become friendly with her stepfather, but you find yourself stymied by your own emotions. Your fear might be “My daughter will become less attached to me and more attached to her stepfather.” Now go on to imagine the worst possible outcome. In this case, it might be “I feel distanced from my daughter. When I see her she ignores me, but she eagerly spends time with her stepfather.” Okay, now think of the positive reality that stands in the way of this fear coming true. What in your actual life suggests that your fear won’t really come to pass? What’s the single key element? In this case, it might be “The fact that my daughter is extremely attached to me and loves me, and it’s obvious to anyone around us.” Close your eyes and elaborate on this reality.
Now take a step back. Did the exercise help? I think you’ll find that by being reminded of the positive reality standing in the way, you will be less transfixed by the anxious fantasy. When I conducted this kind of mental contrasting with people in Germany, they reported that the experience was soothing, akin to taking a warm bath or getting a massage. “It just made me feel so much calmer and more secure,” one woman told me. “I sense that I am more grounded and focused.”
Mental contrasting can produce results with both unjustified fears as well as overblown fears rooted in a kernel of truth. If as a child you suffered through a couple of painful visits to the dentist, you might today fear going to get a filling replaced, and this fear might become so terrorizing that you put off taking care of your dental needs until you just cannot avoid it. Mental contrasting will help you in this case to approach the task of going to the dentist. But if your fear is justified, then mental contrasting will confirm this, since there is nothing preventing your fear from coming true. The exercise will then help you to take preventive measures or avoid the impending danger altogether.
Before doing mental contrasting, your actions might feel forced in a way which is constraining: you have a fear of something, and that fear is forcing you to act in ways which feel like they are against your better judgment. That is, some subagents feel like the fear is correct, while others feel that it’s unjustified. When you do mental contrasting by finding a reassuring mental image, you are taking the point of view of some subagents (the ones that think “this fear is unjustified”) and translating it into a language which the other subagents (the ones that think the fear is justified) can understand. By integrating information across them, they may come into agreement that there is nothing that needs to be done. Alternatively, failing to find any counter-evidence to the fear may convince the subagents that were trying to dismiss the fear that they were mistaken, in which case you find yourself compelled to take some kind of action.
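One way to make this evidence-integration framing precise is as a Bayesian update (a minimal sketch; the priors and likelihoods below are invented for illustration and are not from the book): the fearful subagent holds a prior that the fear is justified, and the reassuring image is evidence that is much more likely in worlds where the fear is unjustified.

```python
# Toy sketch of mental contrasting as a Bayes update on "is the fear justified?"
# All numbers are illustrative.

def bayes_update(prior, p_evidence_if_justified, p_evidence_if_unjustified):
    """Posterior probability that the fear is justified, given the evidence."""
    joint_justified = prior * p_evidence_if_justified
    joint_unjustified = (1 - prior) * p_evidence_if_unjustified
    return joint_justified / (joint_justified + joint_unjustified)

# Fearful subagent starts at P(fear justified) = 0.7. The reassuring image
# ("my daughter is obviously attached to me") is far more likely if the fear
# is unjustified (0.8) than if it is justified (0.1):
posterior = bayes_update(0.7, 0.1, 0.8)
print(round(posterior, 2))  # 0.23: the fear loses most of its force

# If the search turns up no counter-evidence (the imagined "positive reality"
# is equally likely either way), the prior survives intact and taking action
# against the threat still feels compelled:
posterior_no_evidence = bayes_update(0.7, 0.5, 0.5)
print(round(posterior_no_evidence, 2))  # 0.7
```

The two branches mirror the two outcomes above: finding strong counter-evidence reconciles the subagents around "nothing needs to be done," while failing to find any leaves the fear's prior standing and the compulsion to act in place.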
You might notice that while this is optimizing, it feels very different from some of your descriptions of optimizing, such as this paragraph:
Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom. If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests. People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”; “but what if I want to do something that’s not best?” Or, they can acknowledge they’re free to do what they choose, but are dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator.
Optimized-behavior-as-produced-by-mental-contrasting does not feel like this. If you were afraid that your daughter might abandon you, and then became convinced that your daughter is always going to deeply love you and that it would be silly to waste time worrying about it, then your mind-system has decided that the optimal thing to do is to stop worrying. That does not feel like following a rigid algorithm that dictates your choices: it just feels like doing anything else would be pointless and silly. You are in a sense constrained by the underlying optimization process (focusing your time and effort on this issue would feel pointless, so you don’t do it), but it also feels like you are just following your own judgment of what makes sense. The algorithm feels like it could take actions intended to win the daughter’s love back, but that it doesn’t want to; which, looking from the outside, means that the algorithm can’t.
And while you were doing the mental contrasting you were indifferent in a sense: had the contrasting produced the opposite result, you would now feel that it’s important to do something to ensure that your daughter does continue to love you, and not doing anything about it would feel horrifying and impossible. So that too would have locked you into one optimal approach.
Under this interpretation, the Frankl/Buddhist notion of indifference is pointing to the fact that your evaluation process was indifferent. It was free to settle on either end result, rather than being dominated by a single subagent with such a strong fear of abandonment that it was unwilling to fairly evaluate the relevant evidence. But once the evaluation has concluded, you are optimizing; and in a sense you were actually never free to decide, since your mind-system as a whole is optimizing some implicit utility function. The eventual end result of your evaluation was already dictated by the information that was present in your mind-system, but had not yet been integrated.
I believe that the thing which you are describing, where people feel constrained by following a rigid algorithm even when they agree that it’s best, is a situation where such subagent agreement is lacking. Some subagents have become convinced that following the algorithm is the best course of action; other subagents feel like it is not taking into account all information, or that it is walking over some of your needs, and they are communicating their disagreement as a feeling of being constrained or externally dictated. (In particular, your description sounds like one is optimizing by trying to execute a fully legible algorithm, as opposed to optimizing by also listening to their intuitive feelings.)
this feels like it’s moving things towards optimization and indifference.
I came here to say something like “I feel like this post sets up a false dichotomy” and I think you’ve done a better job than I would have at explicating why it feels to me optimization and indifference go together and are not really in opposition, except from within the prisons of our own minds thinking that they are in opposition.