So last time we took a look at the work of Stanovich and ideas coming out of the rationality debate. I tried to explicate the notion of ‘need for cognition’, talked a little bit more about problem finding and the generation of a problem nexus, and then also the affective component of that: wonder and curiosity and sort of balancing them off together.
Then we looked more specifically at Stanovich’s theory of foolishness, which he calls dysrationalia. We looked at the idea of dual processing (S1 and S2) and the idea that what makes you foolish is that S1’s functioning (which makes you leap to conclusions) interferes with the inferential processing of S2. You leap to conclusions inappropriately, and that’s what causes you to be biased in your processing, self-deceptive, foolish, etc. What active open-mindedness does is foreground S2 and protect it from undue interference from S1.
That’s all very good in a theoretical context, but we took a look at the work of Jacobs and Teasdale and said: in a therapeutic context the opposite is the case! There you need that machinery of leaping to work well. We took a look at the work of Baker-Sennett and Ceci showing that cognitive leaping is actually very powerfully predictive of insight, and that’s what you need in therapy: powerful kinds of insight to break you out of the ways in which you’re confronting existential entrapment, inertia, and ignorance. You cannot infer your way through transformative, qualitative change.
So I proposed (and Teasdale has also independently proposed this) that we need a cognitive style that foregrounds S1 (puts us into a state for triggering insight) and tends to background and constrain S2’s processing, and that’s mindfulness. We have evidence that mindfulness facilitates insight, and mindfulness is also increasingly being incorporated into therapeutic settings precisely for its capacity to generate cognitive flexibility and afford insight. So what we’re noticing is that because the relationship between S1 and S2 is opponent rather than adversarial, we’re going to need some higher-order way of coordinating these two cognitive styles, active open-mindedness and mindfulness, so that we can optimize the enhancement in rationality of the relevance realization that is at the core of our intelligence.
Note this idea: how you relate to your intelligence and apply it to itself, the degree to which you problematize your own intelligence and try to improve it, is what we can see as rationality. Then I suggested to you that when I do this, when I recursively and reflectively use my rationality to enhance and optimize my rationality, perhaps by enhancing the relationship between the component styles of mindfulness and active open-mindedness, then I’m moving towards wisdom.
We took a look at that, and in connection with it we took a look at the work of Dweck, again making the argument that the way you relate to your higher cognitive processes (your meaning-making, problem-solving capacity, not just intellectual or information processing) is deeply existential. We saw the work on mindset, and that the way you identify with your intelligence, the way you frame that identification, has a tremendous impact on your need for cognition, your problem-solving, your behavior, your proclivity towards deception, self-deception, etc.
The main bit of this episode that stuck with me was the reframing of growth mindset (see SSC’s commentary on it). Roughly, Vervaeke’s story is that the growth mindset studies are impressive (I think he’s a little too credulous but w/e), but also the evidence that intelligence (in the sense of IQ) is fixed is quite strong, and so having growth mindset about it is untenable. [If there’s a way to turn effort into having a higher g, we haven’t found it, despite lots of looking.] But when we split cognition into intelligence and rationality, it seems pretty obvious that it’s possible to turn effort into increased rationality, and growth mindset seems quite appropriate there.
but also the evidence that intelligence (in the sense of IQ) is fixed is quite strong, and so having growth mindset about it is untenable.
Is this true? Having looked into it, it doesn’t seem super true. My guess is that IQ is about as variable as competence measurements of most diverse skills. You can’t easily run “did this intervention increase IQ?” studies, because IQ tests are highly gameable, so we don’t actually have any specific studies of real interventions on this topic.
My current guess is that you can totally just increase IQ in a general sense, not many people do it because it requires deliberate practice, and I am kind of frustrated at everyone saying it’s fixed. The retest correlation of IQ is only like 0.8 after 20 years! That’s likely less than your retest correlation for basketball skills, or musical instrument playing, or any of the other skills we think of as highly trainable. Of course, it’s less clear how to train IQ since we have less obvious feedback mechanisms, but I just don’t get where this myth of IQ being unchangeable comes from. We’ve even seen massive changes in population-wide IQ studies that correlate heavily with educational interventions in the form of the Flynn effect.
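To make concrete what a 0.8 retest correlation implies at the individual level, here is a toy simulation (simulated Gaussian scores with mean 100, SD 15, and correlation 0.8; made-up data, not any real longitudinal study):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mean, sd, r = 100_000, 100, 15, 0.8

# Draw paired (time-1, time-2) scores with the target retest correlation.
cov = [[sd**2, r * sd**2], [r * sd**2, sd**2]]
t1, t2 = rng.multivariate_normal([mean, mean], cov, size=n).T

change = t2 - t1
print(f"correlation:            {np.corrcoef(t1, t2)[0, 1]:.2f}")  # ~0.80
print(f"SD of individual change: {change.std():.1f}")   # sd*sqrt(2*(1-r)) ~ 9.5
print(f"mean |change|:           {np.abs(change).mean():.1f}")  # ~7.6 points
print(f"share moving 10+ points: {(np.abs(change) >= 10).mean():.0%}")  # ~29%
```

Under these toy assumptions the typical person drifts around 7 to 9 points between tests, and roughly a quarter of people move 10 or more, which is hard to square with “fixed”.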
I’m not sure which claim this is, but I think in general the ability to game IQ tests is what they’re trying to test. [Obviously tests that cover more subskills will be more robust than tests that cover fewer subskills, performance on test day can be impacted by various negative factors that some people are more able to avoid than others, etc., but I don’t think this is that relevant for population-level comparisons.]
The retest correlation of IQ is only like 0.8 after 20 years!
So, note that there are roughly three stages: childhood, early adulthood, and late adulthood. We know of lots of interventions that increase childhood IQ, and also of the ‘fadeout’ effect, whereby the effects of those interventions are short-lived. I don’t think there are that many that reliably affect adult IQ, and what we’re interested in is the retest correlation of IQ among adults.
In adulthood, things definitely change: generally for the worse. People make a big distinction between ‘fluid intelligence’ and ‘crystallized intelligence’, where fluid intelligence declines with age and crystallized intelligence increases (older people learn more slowly but know more facts and have more skills). What would be interesting (to me, at least) are increases (or slower decreases) on non-age-adjusted IQ scores. Variability in the 20-year retest correlation could pretty easily be caused by aging faster or slower than one’s cohort.
That’s almost certainly much less than your retest correlation for basketball skills
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven’t yet found tests applied before and after an intervention (like a semester on a basketball team or w/e). We could get a better sense of this by looking at Elo scores over time for chess players, perhaps? [Chess is widely seen as trainable, and yet also has major ‘inborn’ variation that should show up in the statistics over time.]
We’ve even seen massive changes in population-wide IQ studies that correlate heavily with educational interventions in the form of the Flynn effect.
Lynn is pretty sure it’s not just education, as children before they enter school show the same sorts of improvements. This could, of course, still have education as an indirect cause, where (previous) education is intervening on the parents, and I personally would be surprised if education had no impact here, but I think it’s probably quite small (on fluid intelligence, at least).
I don’t think there are that many that reliably affect adult IQ, and what we’re interested in is the retest correlation of IQ among adults.
Yep. 0.8 is retest correlation among adults. Also, like, I don’t know of any big studies that tried to increase adult IQ with anything that doesn’t seem like it’s just obviously going to fail. There are lots of “here is a cheap intervention we can run for $50 per participant”, but those obviously don’t work for any task that already has substantial training time invested in it, or covers a large battery of tests.
Lynn is pretty sure it’s not just education, as children before they enter school show the same sorts of improvements.
Yep, definitely not just education. Also lots of other factors.
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven’t yet found tests applied before and after an intervention (like a semester on a basketball team or w/e).
One of the problems here is that IQ is age-normalized. In absolute terms you are actually almost always seeing very substantial subcomponent drift and change; the ways people change just tend to be correlated across individuals (i.e. people change in similar ways at the same age). This exaggerates any retest correlations compared to something like a basketball test, which wouldn’t be age-normalized.
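One way to see this: Pearson correlation is blind to shifts and rescalings shared by the whole cohort, which is exactly the variation that age-norming removes. A toy sketch (made-up raw subtest scores, with an assumed uniform 12-point decline between tests):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

raw_t1 = rng.normal(50, 10, n)               # raw subtest score, first test
raw_t2 = raw_t1 - 12 + rng.normal(0, 5, n)   # everyone drops ~12 raw points

# Age-norming subtracts each cohort's mean and rescales to mean 100, SD 15,
# so the shared decline vanishes. Pearson r is unchanged by these transforms.
iq_t1 = 100 + 15 * (raw_t1 - raw_t1.mean()) / raw_t1.std()
iq_t2 = 100 + 15 * (raw_t2 - raw_t2.mean()) / raw_t2.std()

print(np.corrcoef(raw_t1, raw_t2)[0, 1])  # identical...
print(np.corrcoef(iq_t1, iq_t2)[0, 1])    # ...to this
print(raw_t2.mean() - raw_t1.mean())      # ~ -12: big absolute change, invisible in r
print(iq_t2.mean() - iq_t1.mean())        # ~ 0 after norming
```

So a high retest correlation on normed scores says rank order is stable, not that absolute performance is.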
To make my epistemic state here a bit more clear: I do think IQ is clearly less trainable than much narrower skills like “how many numbers can you memorize in a row?”. But I don’t think IQ is less trainable than any other set of complicated skills like “programming skill” or “architecture design” skill.
My current guess is that if you restrict to people who already know how to program, and you run a research program with about as much sophistication as current IQ studies on “can we improve people’s programming skill?”, you would find results about as convincing, saying “no, you can’t improve people’s programming skill”. But this seems pretty dumb to me. We know of many groups that have substantially outperformed other groups in programming skill, and my inside view here totally outweighs the relatively weak outside view from the mediocre studies we are running. I also bet you would find that programming skill is really highly heritable (probably more heritable than IQ), and then people would go around saying that programming skill is genetic and can’t be changed, because everyone keeps confusing heritability with genetics and it’s terrible.
This doesn’t mean increasing programming skill is easy. It actually seems kind of hard, but it also doesn’t seem impossible, and from the perspective of a private individual “getting better at programming” is a totally reasonable thing to do, even if “make a large group of people much better at programming” is a really hard thing to do that I don’t have a ton of traction on. I feel similarly about IQ. “Getting better at whatever IQ tests are measuring” is a pretty reasonable thing to do. “Design a large scale scalable intervention that makes everyone much better” is much harder and I have much less traction on that.
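The heritability point above can be made concrete with a toy variance decomposition (made-up variance numbers): heritability is a ratio of variances within a population, so an environment shift that lifts everyone, Flynn-style, moves the mean a lot while leaving heritability untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

g = rng.normal(0, 12, n)   # genetic contribution (toy numbers)
e = rng.normal(0, 6, n)    # environmental noise

old = 100 + g + e          # cohort under the old environment
new = 110 + g + e          # same genes, environment improved for everyone

h2 = g.var() / old.var()   # heritability: Var(G) / Var(P), ~0.8 here
print(f"heritability: {h2:.2f}")
print(f"mean shift:   {new.mean() - old.mean():.1f}")  # +10 despite h2 ~ 0.8
```

High heritability constrains what explains the variation between people right now; it says nothing about whether the whole distribution can move.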
Episode 42: Intelligence, Rationality, and Wisdom
I think laying out your thoughts on this would make a great top-level post. Starting from your comments here and then adding a bit more detail.
Do you happen to remember the source for this? I’m having trouble finding any studies that seem to bear directly on the question.