Something that occurred to me, inspired by many of the details of your story, was that actively seeking to cultivate rationality may internalize one’s locus of control.
Locus of control is a measurable psychological trait that ranges from “internal” to “external”. An internal locus roughly indicates that you think events in your life are primarily affected by your self, your plans, your choices, and your skills. You can measure it generally or for specific domains, and an internal locus of control is associated with more interest and participation in politics and with better management of diabetes.
My initial hypothesis for any particular person (reversing the fundamental attribution error and out of considerations of inferential distance) is generally that their personal locus of control is a basically accurate assessment of their abilities within the larger context of their life. If someone lives in a violent and corrupt country and lacks money, guns, or muscles then an external locus of control is probably a cognitive aspect of their honest and effective strategy for surviving by “keeping their head down”. When I imagine trying to change someone’s locus of control with this background assumption, the critical thing seems likely to be changing their circumstances so that they are objectively less subject to random environmental stresses with things like corruption-reducing political reform, or creating protected opportunities to work and keep the fruits of their labor, or something else that directly and materially changes their personal prospects for success.
I’d always thought that locus of control had obvious connections to rationality, in that it seemed that a justifiably external locus of control would make it rational to not bother cultivating rationality. Significant efforts or careful planning are pointless if success and failure in life will be dominated by unpredictable factors that swoop in from “out there” to manipulate outcomes in unforeseen ways. If your ship’s destination will be determined by random winds that can tear your sails to shreds or speed you swiftly to a surprise destination, why bother making a map? The choice is pretty much just whether to get in the ship at all, and it’s probably a bad idea unless your current conditions are abysmal.
Your story makes me wonder about connections in the other direction, from rationality to locus of control. It seems plausible that cultivated rationality might teach people to notice patterns, to find points of leverage, and to see the ways that they can affect the things that matter to them. Rationality education might be a personal intervention that could internalize a person’s locus of control on the cheap, even without having substantial political influence or resources to direct their way.
More pragmatically, this makes me wonder if it would be useful to measure people’s locus of control before and a while after an intervention designed to improve rationality? I guess an alternative hypothesis is that you’ve been involved in meetups and your social environment might have improved? Perhaps any group of reasonably non-evil people could have helped just as well? I can’t think of any simple way off the top of my head to measure something that might help control for this factor...
It seems like it would be nice if “rationality itself” was the secret sauce, but “proving it for real” and then maybe optimizing based on the post-proof insights feels like something demanded by full thematic consistency :-)
Your story makes me wonder about connections in the other direction, from rationality to locus of control. It seems plausible that cultivated rationality might teach people to notice patterns, to find points of leverage, and to see the ways that they can affect the things that matter to them.
I would definitely say yes. There are people who have a tendency to think that if there’s any major component of randomness involved in something, then it’s pointless to try to make plans relating to that thing. Simply grokking expected utility and some very basic probability theory would help these people tremendously, while also shifting their locus of control inwards.
That’s really helpful. I can see that tendency even in my own attempt to explain what an external locus of control would feel like from the inside in emotionally compelling terms where I wrote:
If your ship’s destination will be determined by random winds that can tear your sails to shreds or speed you swiftly to a surprise destination, why bother making a map? The choice is pretty much just whether to get in the ship at all, and it’s probably a bad idea unless your current conditions are abysmal.
To be less dramatic and more balanced I should have said that the choice is whether to get in the ship at all, comparing the expected value of travel versus the expected value of one’s present circumstances, perhaps with a risk of ruin calculation to handle the different variances and valid risk aversion. My first wording revealed strong risk aversion and no implication of comparative calculation.
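The comparison I should have made can be sketched numerically. This is a minimal illustration with made-up payoffs and probabilities (none of these numbers come from anywhere; they exist only to show the shape of the calculation):

```python
# Hypothetical, purely illustrative numbers for the ship analogy.
STAY_VALUE = 10.0    # certain value of one's present circumstances
VOYAGE_GOOD = 50.0   # value if the voyage succeeds
VOYAGE_RUIN = 0.0    # value if the winds wreck the ship
P_SUCCESS = 0.4      # assumed probability the winds cooperate

def expected_value(p_success: float) -> float:
    """Expected value of boarding the ship."""
    return p_success * VOYAGE_GOOD + (1 - p_success) * VOYAGE_RUIN

def risk_of_ruin(p_success: float) -> float:
    """Probability of ending up with nothing."""
    return 1 - p_success

ev_voyage = expected_value(P_SUCCESS)
print(f"EV(stay) = {STAY_VALUE}, EV(voyage) = {ev_voyage:.1f}, "
      f"risk of ruin = {risk_of_ruin(P_SUCCESS):.0%}")
# With these numbers, expected value favors the voyage (20 > 10),
# yet a 60% chance of total ruin can still rationally tip a
# risk-averse traveler toward staying put.
```

The point is just that the decision is a comparison of two quantities plus a variance term, not a reflexive refusal to board.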
...Of course, now that I think about it, even that specific analogy suggests historical examples. People have literally been forced onto ships with little opportunity to research the voyage or calculate expected values, when they were to be sold as slaves, serve in the navy, or fight in the jungle. I can easily imagine that many of these people updated in the direction of an external locus of control, and later would “rationally expect” that cultivated rationality wouldn’t be that useful. By the same token, in those specific circumstances, cultivated rationality might have helped them avoid situations where they were likely to be press-ganged?
But now we’re getting into “blaming the victim” territory with all the confusions inherent to politics. It makes me wonder if a strong desire to be sympathetic, translated into controversial political questions like these, limits a person’s likely appreciation for cultivated rationality? Maybe the (Gendlin-ignoring) logic would run: “If I believed people could have predicted and avoided their current tragic circumstances, then it will be harder for me to be sympathetic, but I want to be sympathetic so I should not believe that people could have predicted and avoided their tragedy.”
Perhaps some kind of “active sympathy” techniques could make rationality training more useful and resilient in adverse social circumstances? I would guess that the heart of the trick would be to reverse the latent fear (rather than simply reduce it) and show that irrationality actually tends to reduce effective sympathy, while cultivated rationality tends to increase it. Googling around, I find empathic concern as a keyword, with measures being developed in the late 1970s, and intervention-efficacy work appearing by 2007 for things like couples therapy.
But now we’re getting into “blaming the victim” territory with all the confusions inherent to politics. It makes me wonder if a strong desire to be sympathetic, translated into controversial political questions like these, limits a person’s likely appreciation for cultivated rationality? Maybe the (Gendlin ignoring) logic would run: “If I believed people could have predicted and avoided their current tragic circumstances, then it will be harder for me to be sympathetic, but I want to be sympathetic so I should not believe that people could have predicted and avoided their tragedy.”
I think it is better to be sympathetic regardless of whether “people could have predicted and avoided their current tragic circumstances” (whatever that counterfactual means; maybe that a more rational person facing the same problem would have predicted and avoided it?). Like Eliezer says:
“We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition... It is not always helping people, to save them from the consequences of their own actions; but I draw a moral line at capital punishment. If you’re dead, you can’t learn from your mistakes.”
I am going to go ahead and push that moral line out to cover paralyzing loss of autonomy.
But now we’re getting into “blaming the victim” territory with all the confusions inherent to politics. It makes me wonder if a strong desire to be sympathetic, translated into controversial political questions like these, limits a person’s likely appreciation for cultivated rationality?
A little knowledge is a dangerous thing. Assume for a second the hypothesis is true: Slaves became slaves because Africa wasn’t rational enough. If we are sympathetic based on false beliefs, then we will not be able to offer them a true solution. We might offer them our sympathies, or be more willing to donate to their cause (even if it’s irrational), but we won’t be able to stop it from happening again.
If we believe that people could have avoided these tragedies through rationality (assuming this is true), then we automatically have the solution for avoiding these tragedies in the future. Just add rationality! It doesn’t matter how sympathetic we are if all we do with our sympathy is wander around, looking for the answer we’ve blinded ourselves to.
Sympathy is more than just feeling bad for the victim while you let them get exploited again and again. Sympathy is understanding the victim and having a desire to help. Clear, truthful understanding of all causes of victimization is a prerequisite for both of these to occur. You cannot understand a victim until you understand how they truly came to be a victim. You cannot provide meaningful help until you understand their role in the problem. Sympathy without rationality is just worthless pity.
Significant efforts or careful planning are pointless if success and failure in life will be dominated by unpredictable factors that swoop in from “out there” to manipulate outcomes in unforeseen ways. If your ship’s destination will be determined by random winds that can tear your sails to shreds or speed you swiftly to a surprise destination, why bother making a map? The choice is pretty much just whether to get in the ship at all, and it’s probably a bad idea unless your current conditions are abysmal.
I didn’t grok this much. Are you saying that rationality might not help people who will have an external locus of control regardless, or that you used to think this, or something different?
I’m saying that if someone really doesn’t have the ability to influence outcomes of personal interest, then it might really be senseless to make plans or worry about acting coherently. Someone might have an internal locus of control with respect to a slot machine, believing that their timing and bar-pulling-technique actually matter, and try to do statistically significant studies on which technique is best.
Maybe the person would discover a broken slot machine and figure out how to game it? It’s possible. But mostly they would just be crazy.
Wow. I wanted to say something like that, but this is waaaay better.
Your story makes me wonder about connections in the other direction, from rationality to locus of control.
I think that the shift probably has to do with framing things as you deciding to take actions which are linked to specific utilities, rather than things happening to you.
There seems to be an emphasis in lots of older philosophies (most monotheistic religions, Norse mythology, Stoicism, Daoism) on external loci of control. I wonder how much of that is because they’re right, because they’re memetically infective, or just because people didn’t know how to control things well.
Hmm. I wonder if it’s worthwhile to make a distinction between external locus of control and absence of a locus of control; Stoic-style fatalism seems subtly different from Calvinist-style predestination, and somewhat more clearly distinguished from limited self-determination within a motivational landscape defined mainly by forces outside your control.
Yeah. I winced a bit when I clumped them together like that.
It seems to me that Stoicism asserts that your locus of control over external events is external, but that you can control yourself, and that by going along with Nature you can eliminate your suffering.