These individuals would have an incentive to cheat the tests to get into the country.
The incentive is there, but how much cheating would follow? Teenagers taking GCSE & A-level exams have incentives to cheat too, but the observed rate of exam malpractice is nonetheless very low, about 0.02%. No doubt some cheating isn’t caught, but even if all malpractice were cheating, and 99% of cheating went undetected, the cheat rate would be a scant 2%.
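The arithmetic behind that upper bound can be made explicit (a hedged illustration; only the 0.02% malpractice rate comes from the observed GCSE/A-level figure above, the 99% undetected assumption is deliberately pessimistic):

```python
detected_rate = 0.0002       # observed malpractice: ~0.02% of exam entries
undetected_fraction = 0.99   # pessimistic assumption: 99% of cheating uncaught

# If detections are only 1% of all cheating, total cheating is the
# detected rate scaled up by the detection fraction.
total_cheat_rate = detected_rate / (1 - undetected_fraction)
print(f"{total_cheat_rate:.0%}")  # prints "2%"
```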
More generally, is the potential for cheating the true objection here? (It seems worth asking that rather than silently downvoting, troll toll be damned.) Unless cheating were really pervasive, raising the IQ threshold for entry could maintain the average IQ of immigrants granted entry.
It’s very easy to inflate your score on an IQ test by prepping. They’re designed to be taken without any familiarity with the material or context. I don’t know exactly how much you can eke out by studying, say, Raven’s Matrices, but it’s large enough that the predictive value of the tests would drop like a stone. In contrast, GCSE/A-Level exams are designed in the knowledge that students spend a great deal of effort studying and revising for them.
If an IQ test were developed that had the retest effect as a feature rather than a bug, I’d be more in favor of using it for immigrants.
Ah, I’d interpreted “cheating” to mean nefarious activity taking place during or after the test, not pre-test coaching or preparation.
It’s very easy to inflate your score on an IQ test by prepping.
This much is true. But
I don’t know exactly how much you can eke out by studying, say, Raven’s Matrices, but it’s large enough that the predictive value of the tests would drop like a stone.
is probably false. There are three reasons why I say that.
In the real world, IQ & IQ-like tests appear to work as usual, even when taken by thousands of people who can prep as much as they like. The US Armed Forces are content to test a million people a year with the ASVAB, despite the proliferation of ASVAB prepping resources. As another example, standardized tests like the GRE predict graduate students’ GPA, faculty ratings, and even the number of citations to their publications; this is all the more impressive considering the range restriction of ability among the prospective students taking the tests!
Logically, prep-induced score boosts don’t necessarily imply a drop in predictive validity. If people who started with high scores gained more from prepping than people who started with low scores, a test’s predictive validity could go up, because widening the gap between high- & low-scorers can improve the test’s ability to distinguish the two groups. And there are cases where high-scorers gained more from practising, although the effect on predictive validity as such doesn’t look like it was measured in those studies.
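A toy simulation makes the logic above concrete. All the numbers here are made-up assumptions, not estimates from any study; the key assumption is that prep gains track underlying ability, so the people who would score high anyway gain the most:

```python
import random

random.seed(0)
n = 10_000
# Latent ability drives both the test score and the later outcome.
ability = [random.gauss(0, 1) for _ in range(n)]
raw_score = [a + random.gauss(0, 0.6) for a in ability]
outcome = [a + random.gauss(0, 0.6) for a in ability]
# Assumed prep effect: gains proportional to ability, plus a little noise.
prepped_score = [s + 0.4 * a + random.gauss(0, 0.2)
                 for s, a in zip(raw_score, ability)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r_before = corr(raw_score, outcome)
r_after = corr(prepped_score, outcome)
# Under these assumptions, validity goes up after prepping, not down.
assert r_after > r_before
```

Under these (invented) parameters the test-outcome correlation rises after prepping, because the ability-linked gains add signal rather than noise. If instead the gains were pure noise, the correlation would fall.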
One can also look at how much practice reduces the g loading of IQ tests. It looks like the reduction in g loading is typically small. This review article gives various examples:
Neubauer and Freudenthaler (1994) showed that after 9 h of practice the g loading of a modestly complex intelligence test dropped from .46 to .39. Te Nijenhuis, Voskuijl, and Schijve (2001) showed that after various forms of test preparation the g loadedness of their test battery decreased from .53 to .49. [pages 284-285]
Using the combined experimental and control group, a principal axis factor analysis on the pretest and posttest scores, respectively, resulted in a first unrotated factor explaining 22% of the variance in the pretest scores and 18% of the variance in the posttest scores. [page 294]
That last result comes from a study of South African psychology students, mostly non-white, some of whom were randomly assigned to “mediated learning” training; all of them were tested twice with none other than Raven’s Standard Progressive Matrices.
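For concreteness, the “variance explained by the first unrotated factor” figures quoted above (22% pre, 18% post) roughly correspond to the first factor’s eigenvalue divided by the number of variables. A minimal sketch, using power iteration on a small correlation matrix (the matrix is made up for illustration, and PCA-style eigendecomposition stands in here for principal axis factoring):

```python
# Hypothetical correlation matrix of four standardized test scores.
R = [
    [1.0, 0.5, 0.4, 0.3],
    [0.5, 1.0, 0.4, 0.3],
    [0.4, 0.4, 1.0, 0.3],
    [0.3, 0.3, 0.3, 1.0],
]

def leading_eigenvalue(m, iters=200):
    """Largest eigenvalue of a symmetric matrix, via power iteration."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient with the converged (unit) eigenvector.
    mv = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
    return sum(a * b for a, b in zip(mv, v))

# For standardized variables, total variance = number of variables,
# so the first factor's share is its eigenvalue over that count.
share = leading_eigenvalue(R) / len(R)
```

A drop in that share from pretest to posttest, as in the quoted study, means the first factor accounts for less of the common variance after practice.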
I stand corrected, thanks for the links!