That may usually be the case, but this is not a law. Certain people could conceivably precommit to being reflectively consistent, to follow the results of calculations whenever the calculations are available.
Of course they could. And they would not get as good results from either an experiential or practical perspective as the person who explicitly committed to actual, concrete results, for the reasons previously explained.
The brain makes happen what you decide to have happen, at the level of abstraction you specify. If you decide in the abstract to be a good person, you will only be a good person in the abstract.
In the same way, if you “precommit to reflective consistency”, then reflective consistency is all that you will get.
It is more useful to commit to obtaining specific, concrete, desired results, since you will then obtain specific, concrete assistance from your brain for achieving those results, rather than merely abstract, general assistance.
Edit to add: In particular, note that a precommitment to reflective consistency does not rule out the possibility of one’s exercising selective attention and rationalization as to which calculations to perform or observe. This sort of “commit to being a certain kind of person” thing tends to produce hypocrisy in practice, when used in the abstract. So much so, in fact, that it seems to be an “intentionally” evolved mechanism for self-deception and hypocrisy. (Which is why I consider it a particularly heinous form of error to try to use it to escape the need for concrete commitments—the only thing I know of that saves one from hypocrisy!)
I can’t understand you.

A person who decides to be “a good person” will selectively perceive those acts that make them a “good person”, and largely fail to perceive those that do not, regardless of the proportions of these events, or whether these events are actually good in their effects. They will also be more likely to perceive as good anything that they already want to do or that benefits them, and they will find ways to consider it a higher good to refrain from doing anything they’d rather not do in the first place.
Similarly, a person who decides to be “reflectively consistent” will not only selectively perceive their acts of reflective consistency; they will also fail to observe the lopsided way in which they apply the concept, and they will not notice that their “reflective consistency” is not, in itself, achieving any other results or benefits for themselves or others.
Brains operate on the level of abstraction you give them, so the more abstract the goal, the less connected to reality the results will be, and the more wiggle room there will be for motivated reasoning and selective perception.
So in theory you can precommit to reflective consistency, but in practice you will only get an illusion of reflective consistency.
(Edit to add: If you’re still confused by this, it’s probably because you’re thinking about thinking, and I’m talking about actual behavior.)
I can’t speak for Vladimir, but from my perspective, this is much clearer now. Thanks!
(ETA: FWIW, while most of your comments on this post leave me with a sense that you have useful information to share, I’ve also found them somewhat frustrating, in that I really struggle to figure out exactly what it is. I don’t know if this is your writing style, my slow-wittedness, or just the fact that there’s a lot of inferential distance between us; but I just thought it might be useful for you to know.)
Since I’m trying to rapidly summarize a segment of what Robert Fritz took a couple of books to get across to me (“The Path of Least Resistance” and “Creating”), inferential distance is likely a factor.
It’s mostly his model of decision-making and commitment that I’m describing, with a few added twists of mine regarding the ranking bit and the “worst that could happen” part, as well as links from it to the System 1/System 2 model. (And of course I’ve been talking about Fritz’s idea of the ideal-belief-reality conflict in other threads, and that relates here as well.)
I think pjeby’s point was that reflective consistency is a way of thinking—so if you commit to thinking in a reflectively consistent way, you will think that way when you do think, but you may still wind up not acting on those thoughts every time you would want to, because you’re not all that likely to notice that you need to think them in the first place.
Reflective consistency is not about a way of thinking. Decision theory, considered in the simplest case, talks about properties of actions, including future actions, while ignoring properties of the algorithm generating the actions.
Basically, our conversation went like this:
You: People can’t be reflectively consistent.
Me: Yes they can, sometimes.
You: Of course they can.
Me: I’m confused.
You: Of course people can be reflectively consistent. But only in the dreamland. If you are still confused, it’s probably because you are still thinking about the dreamland, while I’m talking about reality.
No, it went like this:
Me: People can’t be reflectively consistent.
You: But they can precommit to be.
Me: But that won’t *actually make them so*.
You: But they could precommit to acting as if they were.
Me: Of course they can, but it still won’t actually make them so.
See also Abraham Lincoln’s, “If you call a tail a leg, how many legs does a dog have? Four, because calling a tail a leg doesn’t make it so.”
This is a diversion, but this has always struck me as a stupid answer to an even stupider question. I don’t really understand why people think it’s supposed to reveal some deep wisdom.
That’s Zen for you. ;-)
Seriously, the point (for me, anyhow) is that System 2 thinking routinely tries to call a tail a leg, and I think there’s a strong argument to be made that this is an important part of what System 2 reasoning “evolved for”.
Huh? Reflective consistency is a property of behavior. If you behave as if you are reflectively consistent, you are.
And I am saying that a single precommitment to behaving in a reflectively consistent way will not result in your actually behaving in the same way as you would if you individually committed to all of the specific decisions recommended by your abstract decision theory. Your perceptions and motivation will differ, and therefore your actual actions will differ.
People try to precommit in this fashion all the time, by adopting time management or organizational systems that purport to provide them with a consistent decision theory over some subdomain of decisions. They hope to then simply commit to that system, and thereby somehow escape the need for making (and committing to) the individual decisions. This doesn’t usually work very well, for reasons that have nothing to do with which decision theory they are attempting to adopt.
In my original comment, I specified that I only consider the situations “where the calculations are available”, that is, situations where you know (theoretically!) exactly what to do to be reflectively consistent and don’t need to achieve great artistic feats to pull that off.
You need to qualify what you are asserting, otherwise everything looks gray.
I’m asserting that people don’t actually do what they “decide” to do on the abstract level of System 2 unless certain System 1 processes are engaged with respect to the concrete, “near” aspects of the situation where the behavior is to be executed, and that merely precommitting to follow a certain decision theory is not a substitute for the actual, concrete System 1 commitment processes involved.
Now, could you commit to following a certain behavior under certain circumstances that included the steps needed to also obtain System 1 commitment for the decision?
That I do not know. I think maybe you could. It would depend, I think, on how concretely you could define the circumstances when these steps would be taken… and doing that in a way that was both concrete and comprehensive would likely be difficult, which is why I’m not so sure about its feasibility.
Your model of human behavior doesn’t look in the least realistic to me, with its prohibition of reason, and requirements for difficult rituals of baptising reason into action.
Well, I suppose all the experiments that have been done on construal theory, and how concrete vs. abstract construal affects action and procrastination must be unrealistic, too, since that is a major piece of what I’m talking about here.
(If people were generally good at turning their reasoning into action, akrasia wouldn’t be such a hot topic here and in the rest of the world.)
Akrasia happens, but it’s not a universal mode. I object to you implying that akrasia is inevitable.
I never said it was inevitable. I said it happens when there are conflicts, and you haven’t really decided what to do about those conflicts, with enough detail and specificity for System 1 to automatically make the “right” choice in context. If you want different results, it’s up to you to specify them for yourself.