The CIA report, which I haven’t read, is pretty hard to fake. But it may not be so useful if we don’t know how good the baseline of CIA analysts is, or how hard the questions are. Tetlock says that it’s very easy to write very simple computer programs that beat all of the people he assessed.
I read somewhere that both the CIA analysts and Bueno de Mesquita got 90%, whatever that means, but that he was more precise. I don’t know how well-calibrated 90% is, but I suspect that the CIA analysts had more humility than any of Tetlock’s experts, perhaps because they considered themselves cogs, rather than celebrities.
I assume he really did publish lots of predictions. But I wouldn’t be shocked if he’s blatantly lying and they’re just wrong. Here’s an example of someone whose own publications contradict his account of his predictions. In his 1989 epilogue to Liar’s Poker, Michael Lewis begins with “I didn’t think the firm was doomed. I didn’t think that Wall Street would collapse.” Today, he writes “In the two decades since then, I had been waiting for the end of Wall Street.” These quotes are not entirely fair, being cherry-picked from the midst of qualifications and explanations. But the similarity of language is striking.
(When I invoke Tetlock, I’m talking about his book. I don’t know what he wrote about BdM.)
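Tetlock’s claim is easy to illustrate with a toy simulation (my numbers, not his data): a program that always predicts the base rate gets a better Brier score than a hypothetical overconfident expert who calls the direction right only 60% of the time.

```python
import random

random.seed(0)

# Toy setup: events occur with a 20% base rate.  The "simple program" always
# predicts 0.2.  The hypothetical expert predicts 0.9 or 0.1, confidently,
# but guesses the right direction only 60% of the time.
N = 10_000
events = [random.random() < 0.2 for _ in range(N)]

def brier(preds, outcomes):
    # Mean squared error of probability forecasts; lower is better.
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(outcomes)

base_rate_preds = [0.2] * N
expert_preds = [
    (0.9 if e else 0.1) if random.random() < 0.6 else (0.1 if e else 0.9)
    for e in events
]

b_base = brier(base_rate_preds, events)
b_expert = brier(expert_preds, events)
print(b_base, b_expert)   # base-rate program ~0.16, "expert" ~0.33
```

The base-rate program wins not because it knows anything, but because the expert’s overconfidence is heavily penalized whenever he’s wrong.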
Tetlock says that it’s very easy to write very simple computer programs that beat all of the people he assessed.
This is probably related to the fallacy of college admissions. Everyone in admissions thinks they can do a better job of predicting college success by using more criteria and their own professional judgement, and in every test using the SAT alone does better in aggregate.
in every test using the SAT alone does better in aggregate.
This is emphatically wrong. According to the most pro-SAT source I can think of, SAT scores plus high-school GPA are the most accurate predictor of first-year grades available. I admit the possibility that if you use a different measure of college success, my point is not valid.
It may be comforting to overstate the value of standardized testing, but that does not make it true, or excuse unfounded generalizations. One should be particularly careful to avoid such generalizations in a blue-green issue like the value of intelligence testing.
I think billswift’s point was that the SAT is a better predictor than human judgement, as in reading the application letter, etc.
It seems plausible that adding another “objective” measure like the HSGPA will further increase accuracy. But the big point is that using human judgement will DECREASE accuracy.
The really interesting paper is the following, as suggested by Alicorn:
Michael Bishop, “In Praise of Epistemic Irresponsibility: How Lazy And Ignorant Can You Be?”
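A toy simulation (hypothetical numbers, not real admissions data) shows the mechanism: if first-year GPA depends partly on the SAT signal plus noise nobody can see, then filtering the SAT through human judgement, which adds idiosyncratic noise of its own, can only lower the correlation with the outcome.

```python
import random

random.seed(1)

# Toy model: GPA = 0.5 * SAT + unpredictable noise.  The "clinical" human
# forecast starts from the SAT signal but adds its own judgement noise
# (interview impressions, essay style, ...).
N = 5_000
sat   = [random.gauss(0, 1) for _ in range(N)]
gpa   = [0.5 * s + random.gauss(0, 1) for s in sat]   # true outcome
human = [s + random.gauss(0, 1) for s in sat]         # SAT + judgement noise

def corr(xs, ys):
    # Pearson correlation, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r_sat = corr(sat, gpa)      # SAT alone, ~0.45
r_human = corr(human, gpa)  # SAT filtered through judgement, ~0.32
```

Under these assumptions the SAT-only predictor always comes out ahead; the human can only add noise, never information, which is the pattern the clinical-vs-actuarial literature keeps finding.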
I don’t remember my sources, but I came across this in several published books in the 1990s (I think they were from the late 1980s and early 90s, but I’m not sure). This was before they weakened the SAT, so maybe that accounts for the difference; or maybe you’re going by ad copy rather than cognitive psych reports, which try to show the value of the tests without offending the consumers of the tests.
For more on this and related phenomena, see Michael Bishop, “In Praise of Epistemic Irresponsibility: How Lazy And Ignorant Can You Be?”
Great paper; I think it warrants a post here on LW.
I will probably have time to do this tomorrow or the day after. It would probably be a suitable topic.