You never say what the problems are. So what if the subjects lie? Maybe “orientation” as measured by what people want to project is more useful than their actual behavior.
This reminds me of Shalizi’s complaints about IQ (not his complaints about intergroup differences that Johnicholas linked). One difference is that for SOI there’s a definite number (actual turnover of sex partners) that could be confused with it. But the name seems designed to discourage that confusion.
These are real problems, but it’s not obvious that the “basic sanity checks” you suggest would lead to better measures.
The problem is that aggregate SOI must be very similar for men as a group and for women as a group; in particular, answers to questions 1-3 must be extremely close (as they measure behaviour, not orientation, and there’s one man and one woman in every sex pairing of the past year, every sex pairing of the next five years, and every one-night stand, other than some tiny effects). The data makes it extremely obvious that they’re not, so there’s a spectacular amount of systematic lying going on.
Doing anything with answers to questions you know people systematically lie about, you need to ask yourself what you are really measuring.
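The accounting argument above can be sketched as a toy simulation (hypothetical numbers, not the actual SOI data): in a closed heterosexual population, every pairing adds exactly one partner to one man’s count and one to one woman’s count, so the totals are identical by construction, and the group means can differ only through the men-to-women population ratio.

```python
import random

random.seed(0)
n_men, n_women = 1000, 1000  # hypothetical closed population
men = [0] * n_men
women = [0] * n_women

# 5000 pairings, assigned arbitrarily; each one increments exactly
# one man's partner count and exactly one woman's partner count.
for _ in range(5000):
    men[random.randrange(n_men)] += 1
    women[random.randrange(n_women)] += 1

assert sum(men) == sum(women)  # identical totals, however pairings fall
# With equal group sizes, the group means must match exactly.
print(sum(men) / n_men, sum(women) / n_women)  # 5.0 5.0
```

The individual distributions can still differ wildly (a few men with many partners vs. many men with few), which is why reported medians can diverge even when honest means cannot.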
Yeah, there could be some lying going on (though there doesn’t have to be a “spectacular” amount; see Psychohistorian’s response).
However, just because people tend to lie about a certain behavior doesn’t make it useless to try to measure it. Rather than just giving up, psychologists often employ measures that detect deceptiveness or social desirability bias, such as the Marlowe-Crowne scale.
Doing anything with answers to questions you know people systematically lie about, you need to ask yourself what you are really measuring.
True. But at least in this case, people who underreport on this scale probably have less of what it’s actually trying to measure than people with the same behavior who report accurately. Since the SOI is about orientation, how forthcoming and proud you are of the behavior it measures could be seen as part of that orientation.
For one, the amount of lying changes drastically depending on tiny details of how the test is administered. If you know widespread lying is going on, and want to include it, you need to standardize testing conditions.
Doing anything with answers to questions you know people systematically lie about, you need to ask yourself what you are really measuring...
For one, the amount of lying changes drastically depending on tiny details of how the test is administered. If you know widespread lying is going on, and want to include it, you need to standardize testing conditions.
Those are good points. They definitely could produce better measures.
Shalizi’s complaints are semi-valid: if you throw a huge amount of somewhat correlated data at PCA, you will most likely get a small number of components, with one explaining most of the variance. And when you start removing data that doesn’t correlate highly enough (as obviously “testing something else”), the leading component will only seem statistically stronger.
I’m quite surprised, but it mirrors very closely what I think about the Big Five personality traits: factors on their own don’t prove anywhere near as much as is commonly stated, and can just as easily be statistical artifacts.
This criticism doesn’t mean that either IQ or the Big Five are invalid, but it does mean that the case should be made for them independently of “they show up as big factors in PCA”. That seems to be so for IQ, and I’m not terribly convinced it’s also true for the Big Five.
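The pruning effect described above can be demonstrated with a minimal stdlib-only sketch (all data and numbers here are synthetic and hypothetical): many variables that share a weak common factor already yield a dominant first principal component, and dropping the least-correlated variables makes that component’s share of variance look even larger.

```python
import random

random.seed(1)
n, p = 500, 20

# Synthetic data: each variable is a weak shared factor plus noise,
# so everything is "somewhat correlated" with everything else.
factor = [random.gauss(0, 1) for _ in range(n)]
loadings = [random.uniform(0.1, 0.6) for _ in range(p)]
data = [[loadings[j] * factor[i] + random.gauss(0, 1) for j in range(p)]
        for i in range(n)]

def corr_matrix(rows, cols):
    # Standardize the chosen columns, then R = Z^T Z / n.
    zs = []
    for j in cols:
        col = [r[j] for r in rows]
        mu = sum(col) / len(col)
        sd = (sum((x - mu) ** 2 for x in col) / len(col)) ** 0.5
        zs.append([(x - mu) / sd for x in col])
    k = len(cols)
    return [[sum(zs[a][i] * zs[b][i] for i in range(len(rows))) / len(rows)
             for b in range(k)] for a in range(k)]

def leading_share(R):
    # Power iteration for the largest eigenvalue of R; PC1's share of
    # total variance is lambda_1 / trace(R), and trace(R) = k here.
    k = len(R)
    v = [1.0] * k
    for _ in range(200):
        w = [sum(R[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[a] * sum(R[a][b] * v[b] for b in range(k)) for a in range(k))
    return lam / k

R = corr_matrix(data, list(range(p)))
before = leading_share(R)

# Now drop the half of the variables least correlated with the rest,
# as if they were "obviously testing something else", and recompute.
avg_corr = [sum(abs(R[a][b]) for b in range(p) if b != a) / (p - 1)
            for a in range(p)]
keep = sorted(range(p), key=lambda a: -avg_corr[a])[:10]
after = leading_share(corr_matrix(data, keep))

print(f"PC1 share with all {p} variables: {before:.2f}")
print(f"PC1 share after pruning to 10:   {after:.2f}")
```

The pruned PC1 share comes out larger than the original, even though nothing about the underlying factor changed, which is exactly the selection artifact being complained about.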