Since when did greater rigour and averaging over more problems imply a greater degree of correlation with performance at one specific job?
I call halo effect here. Greater rigour, a bigger number of problems, more accuracy, more corrections: all these genuinely 'good' qualities of the GPA value spill over into your feeling of how well it'll correlate with performance at a specific job, versus some 'bad', ill-measured value.
Truth is, say, an ill-measured hand size based on eyeballing can easily correlate better with measured finger length than body weight measured on ultra-high-precision scientific scales accurate to a milligram (microgram, nanogram, whatever). Just because a hammer is a tool you build things with, and a butter knife is a kitchen utensil, doesn't make the hammer better than the butter knife as a screwdriver.
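To make that concrete, here is a tiny simulation (all numbers invented, purely illustrative): a sloppy, noisy proxy for a related quantity correlates far better with it than a perfectly precise measurement of an unrelated one.

```python
import random

random.seed(0)

# Hypothetical population: hand size is driven by finger length;
# body weight is unrelated to it. Numbers are made up for illustration.
finger = [random.gauss(8.0, 0.5) for _ in range(10_000)]
hand_noisy = [f * 2.2 + random.gauss(0, 1.0) for f in finger]  # eyeballed, sloppy
weight_exact = [random.gauss(75.0, 12.0) for _ in finger]      # milligram-precise, irrelevant

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(hand_noisy, finger))    # high, despite the sloppy measurement
print(corr(weight_exact, finger))  # near zero, despite the perfect precision
```

The precision of the measurement and its correlation with the quantity you care about are simply independent properties.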
Well, actually...
But more to the point: you'd need to justify that the test you give correlates with performance better than GPA does. This is why I support simple programming tests (they demonstrably correlate better than academic indicators), but for a 'clerical assistant' position as described above, no specific test immediately springs to mind, and so it's suspect.
You aren't usually looking for 'correlation'; you're looking to screen out the serial job applicant who can't do the job they're applying for (and keeps re-applying to many places)… just ask 'em to do some work similar to what they will be doing, as per LorenzofromOz's method, and you'll at least be assured they can do the work. With GPA you won't be assured of anything whatsoever.
For programming, the simplest, dumbest check works to screen out those entirely incapable, where screening by PhD would not.
http://www.codinghorror.com/blog/2007/02/why-cant-programmers-program.html
A PhD might correlate better with performance than fizzbuzz does (the latter being a binary test of extremely basic knowledge), but a PhD does not screen out those who will just waste your time, and fizzbuzz (your personal variation of it) does.
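For reference, the entire bar being discussed is this low. A minimal FizzBuzz sketch in Python (the classic form, not anyone's personal variant):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

The point of using a "personal variation" is that the candidate can't have memorised this exact solution, so passing shows they can produce at least this much reasoning on the spot.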
Holy crap… I think I had read about the FizzBuzz thing a while ago, but I didn't remember the 199-in-200 thing… Would it be possible to sue the institutions issuing those PhDs or something? :-)
Well, I don't know what % of CS-related PhDs can't do FizzBuzz; maybe the percentage is rather small. (Also, sue for what? You are not their client. The incapable dude who was given a degree, that's their client. Your over-valuation of this degree as evidence of capability is your own problem.)
The issue is that, as Joel explains, the job applicants are a sample extremely biased towards incompetence:
http://www.joelonsoftware.com/items/2005/01/27.html
[Though I would think that the incompetents with degrees would be more able to find an incompetent employer to work at. And PhDs should be able to find a company that hires PhDs for signalling reasons.]
The issue with the hiring methods here is that we easily confuse "more accurate measurement of X" with "stronger correlation to Y", and "stronger correlation to Y" with hiring better staff (the kind that doesn't sink your company), usually out of some dramatically different population than the one on which the correlation was found.
Furthermore, a 'correlation' is such an inexact measure of how a test relates to performance. Comparing correlations is like comparing apples to oranges by weight. The 'fizzbuzz' style problems measure performance near the absolute floor, but with very high reliability: virtually no-one who fails fizzbuzz is a good hire, and virtually no-one who passes fizzbuzz (a unique fizzbuzz, not the popular one) is completely incapable of programming. Degrees correlate with performance at a higher level, but with very low reliability: there are brilliant people with degrees, there are complete incompetents with degrees, and there are brilliant people and incompetents without degrees.
edit: other example:
http://blog.rethinkdb.com/will-the-real-programmers-please-stand-up
Reversing a linked list is a good one, unless the candidate already knows how. See, the issue is that educational institutions don't teach how to think up a way to reverse a linked list, nor do they test for that. They might teach how to reverse a linked list, then test whether the person can reverse a linked list. Some people learn to think of a way to solve such problems; some don't. It's entirely incidental.
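To make concrete what such a question asks for, here is one standard iterative solution, sketched in Python (illustrative only; the point above is that inventing something like this unaided is what's actually being tested):

```python
class Node:
    """A node in a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list in place: O(n) time, O(1) extra space."""
    prev = None
    while head is not None:
        # Re-point the current node backwards, then advance.
        head.next, prev, head = prev, head, head.next
    return prev  # prev is the new head

# Build 1 -> 2 -> 3, reverse it, and read the result back.
node = reverse(Node(1, Node(2, Node(3))))
values = []
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # the list comes back reversed
```

Someone who has merely memorised this can reproduce it on demand; someone who can derive the pointer-juggling in the loop from scratch is demonstrating the thing the interview actually cares about.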