In terms of theory, I’m not sure fixed vs. growth mindset is the best way to describe the comparison. I feel there should be a more precise way to define the two concepts, but I’m not sure exactly how. I still think the research is useful despite my concerns, although you’re more than welcome to argue that it isn’t. Anyway, I’ve been wondering about this in terms of LessWrong. Does LessWrong as a community have a fixed mindset? The praising-for-being-smart vs. praising-for-effort distinction made me wonder whether LessWrong is more concerned with having intelligent discussions, and whether this interferes with improvement in rationality.
If I try to quickly taboo the words “fixed mindset” and “growth mindset”, the essential question is probably this:
Is the person aware (not verbally, but on the gut level) that their own skills could improve in the future, or do they implicitly assume that their skills will always stay the same?
It is a bit more complicated than this. For example, a person may deny the possibility of growth by refusing to classify something as a “skill”, because merely reframing something as a “skill” (as opposed to a “trait”) already suggests the possibility of improvement. One person would say “I am introverted” where another would say “my social skills for dealing with strangers are not good enough (yet)”. In other words, the person may reject not just the possibility of improving their own skill, but the idea of the trait being modifiable at all.
Also, this doesn’t have to apply generally. For example, a stereotypical nerd may assume that anyone can learn programming but that social skills are innate, while another person may assume that social behaviors are learned but that the talent for understanding math or computers is innate. So one can have a “fixed mindset” in some areas and a “growth mindset” in others.
Does LessWrong as a community have a fixed mindset?
Both/neither. The idea that humans can become more rational is central to the website. On the other hand, I guess everyone accepts that IQ is a thing. On the other other hand, transhumanists hope to overcome even those biological limits in the distant future.
But these are the professed beliefs. What do LessWrongers alieve? Not so sure here; but I’d guess that anyone who e.g. participated in a CFAR workshop has revealed the “growth mindset”. But it’s also possible that for some of them the “growth mindset” applies only in a narrow area.
Uhm, how about making a poll with more specific questions, such as “how much do you believe you could improve in X?” for various values of X, such as “social skills” or “your job” or...?
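To make the poll idea concrete, here is a minimal sketch of how the responses might be aggregated: mean Likert scores per domain, so that a domain-specific “fixed mindset” would show up as a low average for that X. The domains, the 1–5 scale, and all response data below are invented for illustration.

```python
# Hypothetical sketch: aggregate Likert-style poll responses
# ("how much do you believe you could improve in X?", scale 1-5)
# per domain, to see whether growth beliefs vary by area.
from statistics import mean

# Invented sample data; real responses would come from an actual poll.
responses = {
    "social skills": [2, 3, 2, 4, 1],
    "your job":      [4, 5, 4, 3, 5],
    "math ability":  [3, 2, 4, 2, 3],
}

# Mean score per domain; lower means suggest a more fixed mindset there.
domain_means = {domain: mean(scores) for domain, scores in responses.items()}
for domain, avg in sorted(domain_means.items(), key=lambda kv: kv[1]):
    print(f"{domain}: {avg:.2f}")
```

Per-domain averages like these would also let you check the earlier point that one person can hold a growth mindset in one area and a fixed mindset in another.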
Is the person aware (not verbally, but on the gut level) that their own skills could improve in the future, or do they implicitly assume that their skills will always stay the same?
I think that’s a good definition of the theory as Carol Dweck would define it; I’m just not sure it’s the best definition of the experimental results. For instance, what precisely is gut-level awareness? How would I test it experimentally if participants can’t verbally express this awareness? Is the fixed mindset due to unawareness of the ability to improve, or to a desire to stay the same? Is it that the individual is aware they can improve, but is overestimating their probability of getting worse or underestimating their probability of getting better? Is it an issue of avoiding failure, or a failure to approach goals? If I were to define the two terms, I might use something like:
fixed-mindset—When individuals are praised for their attributes, they are more likely to engage in behaviors intended to display or protect those attributes.
growth-mindset—When individuals are praised for their effort, they are more likely to engage in behaviors intended to improve their attributes.
But that’s rough. I’m not familiar with all the studies on the subject.
How would I test it experimentally if they can’t vocally express this awareness?
Just as you can run implicit racism tests, I think you could also run tests where you let participants read various statements and measure their reactions.
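As a rough illustration of what such a reaction-based measure might compute: compare mean reaction times to fixed-framed versus growth-framed statements, on the (assumed) logic that faster endorsement of fixed framings hints at an implicit fixed mindset. This is loosely inspired by implicit-association-style scoring; the data and the scoring rule are entirely invented, not an actual validated instrument.

```python
# Hypothetical sketch of an implicit-association-style score:
# mean reaction times (ms) when endorsing fixed-framed vs.
# growth-framed statements. All numbers are made up for illustration.
from statistics import mean

fixed_rt = [620, 580, 640, 610]    # RTs endorsing fixed-framed statements
growth_rt = [710, 690, 730, 700]   # RTs endorsing growth-framed statements

# Positive score: quicker to endorse fixed framings than growth framings,
# which would (very roughly) suggest an implicit fixed mindset.
implicit_fixed_score = mean(growth_rt) - mean(fixed_rt)
print(f"implicit fixed-mindset score: {implicit_fixed_score:.0f} ms")
```

A real study would of course need counterbalanced stimuli, error handling, and a validated scoring procedure; this only shows the shape of the measurement.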
fixed-mindset—When individuals are praised for their attributes, they are more likely to engage in behaviors intended to display or protect those attributes. growth-mindset—When individuals are praised for their effort, they are more likely to engage in behaviors intended to improve their attributes.
I think that points to part of the experiments but it doesn’t explain the whole concept.
Carol Dweck on fixed vs. growth mindsets