Objectively measured intelligence famously fits a bell curve.
As was pointed out to me on this website some time ago, this is not a scientific discovery but a definition. IQ scores fit on bell curves because they’re normalized to do so.
Well, yes and no.
Historically, IQ tests started as tests to determine whether children were ready to attend elementary school, or whether they should wait another year.
In those first tests, a child's IQ was calculated by the formula IQ = 100 × mental age ÷ physical age, where physical age was how old the child really was, and “mental age” was the age you would guess from what the child could do. For example, if a child scored as many points on the test as an average six-year-old would, that child's mental age was 6. If the child's physical age was only 5, that gives IQ = 100 × 6 ÷ 5 = 120. Note that the average child has an IQ of 100 by definition; the whole point of multiplying by 100 was just to avoid decimals.
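A minimal sketch of that old ratio formula in Python (the function name and the example values are just illustration):

```python
def ratio_iq(mental_age: float, physical_age: float) -> float:
    """Old-style 'ratio IQ': 100 × mental age ÷ physical age."""
    return 100 * mental_age / physical_age

# The example from the text: a 5-year-old who scores like an average 6-year-old.
print(ratio_iq(mental_age=6, physical_age=5))  # 120.0
```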
Later, when psychologists tried to extend the definition of IQ to older people, the old formula stopped working. The number of points on an IQ test is not a linear function of age, and past a certain age it is not even monotonic; the concept of “mental age” is simply not well defined for adults. This is why the definition was changed. However, the new definition was designed to be backwards compatible with the old one (i.e. children tested with the old tests and the new tests should get similar results).
The IQ values by the old definition did approximately fit the bell curve (but not exactly; there are more extremely stupid people than extremely smart people). So the new definition dropped the concept of “mental age” entirely: it takes the distribution of raw test scores for a given physical age and maps it onto the bell curve. In other words, the new definition of IQ fits the bell curve by definition, while the old one fit it naturally.
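To make that “mapping onto the bell curve” concrete, here is a rough sketch of how a deviation IQ could be computed; the function name, the norming sample, and the clamping of extreme percentiles are my own simplifications, not how any real test publisher works:

```python
from statistics import NormalDist

def deviation_iq(raw_score, age_group_scores, mean=100, sigma=15):
    """New-style 'deviation IQ': find the percentile of a raw score within the
    test-taker's age group, then map that percentile onto a normal distribution
    with the chosen mean and sigma."""
    # Percentile rank: fraction of the age group scoring at or below this score.
    rank = sum(s <= raw_score for s in age_group_scores) / len(age_group_scores)
    rank = min(max(rank, 0.001), 0.999)   # avoid infinities at the extreme ends
    z = NormalDist().inv_cdf(rank)        # standard normal quantile
    return mean + sigma * z
```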
By the way, this is why we have multiple IQ scales today. Different scales use different values of sigma; I guess 15 is the most common, but other numbers are used too. This is because different authors of IQ tests, all trying to fit the new definition to the old one, had different data sets measured under the old definition. So if someone's data set of IQ values (measured by “IQ = 100 × mental age ÷ physical age”) had a sigma of 15, they used sigma 15 when normalizing under the new definition… but other people had data sets with a sigma of 16 or 20 or 10 (I am not sure about the exact numbers), so they normalized using that number instead.
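The practical consequence is that the same percentile maps to different numbers on different scales; converting between scales just means holding the z-score fixed (a sketch, assuming both scales use a mean of 100):

```python
def convert_iq(iq, sigma_from, sigma_to, mean=100):
    """Convert an IQ score between scales by keeping the z-score the same."""
    z = (iq - mean) / sigma_from
    return mean + z * sigma_to

# The same percentile: 130 on a sigma-15 scale equals 132 on a sigma-16 scale.
print(convert_iq(130, sigma_from=15, sigma_to=16))  # 132.0
```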
The IQ values by the old definition did approximately fit the bell curve (but not exactly; there are more extremely stupid people than extremely smart people).
This part of the explanation needs the most followup. It’s often proposed that different subpopulations lie on different bell curves. These mixed normal distributions can be complicated. What did they look like in Germany in the 1910s, and were there really techniques available then for recognizing and analyzing them?
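On the “mixed normal distributions can be complicated” point, a quick simulation shows one reason they are hard to recognize: if the subgroup means are close relative to sigma, the mixture looks almost indistinguishable from a single bell curve. (The numbers below are hypothetical, chosen only to illustrate the statistics.)

```python
import random
from statistics import mean, stdev

random.seed(0)

# Two hypothetical subgroups with slightly different means, mixed 50/50.
group_a = [random.gauss(97, 15) for _ in range(5000)]
group_b = [random.gauss(103, 15) for _ in range(5000)]
mixture = group_a + group_b

# The mixture's mean is ~100 and its sigma only slightly inflated (~15.3),
# so from a moderate sample it is very hard to tell apart from one normal curve.
print(mean(mixture), stdev(mixture))
```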
Furthermore, the claim that we have an objective measure of intelligence, and that this measure is the IQ test, is hilarious.
That you call this claim “hilarious” at best shows ignorance. IQ is closely correlated with nearly all of the things people associate with the word “intelligence”. A measure need not be perfect to be useful.