I found this interesting; the paper discusses children’s conceptions of intelligence.

The abstract of the article:

Two studies explored the role of implicit theories of intelligence in adolescents’ mathematics achievement. In Study 1, with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about effort, and causal attributions and strategies was tested. In Study 2, an intervention teaching an incremental theory to 7th graders (N=48) promoted positive change in classroom motivation, compared with a control group (N=43). Simultaneously, students in the control group displayed a continuing downward trajectory in grades, while this decline was reversed for students in the experimental group.
People on LessWrong commonly talk as if intelligence is a thing we can put a number to, which implies a fixed trait. Yet that is counterproductive in children. Is this another example of a useful lie? I feel that this issue is at the core of some of the arguments I have had over the years.
No, it doesn’t. What about weight?

Fair point. Would you agree with, “People on LessWrong commonly talk as if intelligence is a thing we can put a number to (without temporal qualification), which implies a fixed trait”?

We often say our weight is currently X or Y. But people rarely say their IQ is currently Z, at least in my experience.

Yes.
If it works, it can’t be a lie. In any case, surely a sophisticated understanding does not say that intelligence is malleable or not-malleable. Rather, we say it’s malleable to this-and-such an extent, in such-and-these aspects, by these-and-such methods.
“Intelligence is malleable” can be a lie and still work. Kids who believe their general intelligence to be malleable might end up exercising domain-specific skills and a general perseverance so that they don’t get too easily discouraged. That leaves their general intelligence unchanged, but nonetheless improves school performance.
I was thinking of the more mathematical definitions of intelligence, which just give a scalar average of performance over lots of different worlds. They can still be consistent, since they track the agent’s whole history: agents might do better in worlds where they believe that their intelligence changes, just as they might do better in worlds where they are given calculators.
If simple things like the ownership of a calculator can change your intelligence, is it right to think of it as something stable that you can apply fission-like exponential growth to?
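For concreteness, the kind of formal definition presumably being gestured at here is something like Legg and Hutter’s universal intelligence measure, which assigns a single scalar to a policy $\pi$ by averaging its expected reward over all computable environments, weighted by their simplicity:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward the policy $\pi$ obtains in $\mu$. On this definition the scalar attaches to the whole policy, histories included: an agent that performs better after being handed a calculator is just a policy that scores well in calculator-containing environments, so the number itself never changes over time.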