Correcting for correspondence bias means giving more weight to the situational explanation than to the dispositional one, i.e., that I’m the sort of person who writes stupid articles that ramble on.
I may have misunderstood you here, but I interpret the correspondence bias differently. Correcting for it doesn’t mean you should always put more weight on the situational explanation than on personality, which your example clearly shows would sometimes lead to mistakes. It means that, on average, you don’t give the situation as much weight as you should.
The Correspondence Bias is a bias that, on the whole, helps people arrive at more accurate, rather than less accurate, conclusions, and should be corrected with an eye to improving accuracy and correctness, rather than the mere elimination of bias.
I think it’s useful to think of each bias in isolation. Correcting for the correspondence bias should always make you more accurate, because it’s defined relative to what’s true. It says nothing about comparing people with yourself. In practice, though, it might not make sense to think of it this way, since interpreting others’ actions rarely happens without some comparison with what you would do in the same situation. I wouldn’t be surprised if there’s a significant correlation between the FAE and the opposite bias of underestimating the importance of your own personality in how you react to things.
Does this sound like something you could agree with?
I may have misunderstood you here, but I interpret the correspondence bias differently. Correcting for it doesn’t mean you should necessarily always put more weight on the situational explanation than the personality, which your example clearly shows would sometimes lead to mistakes. It means that you mostly don’t give it as much weight as you should.
The contexts in which the correspondence bias tends to be assessed, however, are artificial environments where it leads to incorrect conclusions. How do we judge whether we give the correct weight or not?
Does this sound like something you could agree with?
I have no idea where to place my priors on the possibility of a strong correlation; I’d guess that low rationalization is associated both with high and low FAE (owing to virtue ethics on one tail and rationalists on the other), and that the middle is a bit of a wash. My inclination is to look for studies. Know of any?