Care to explain how you think it’s being misused?
Standard deviation is used to characterize the spread around the mean of a normal distribution; it is not intended to characterize the tails. This is why discussion of it tends to focus on 1-2 SDs, where the bulk of the data lies, and rarely on 3-4 SDs: real-world data is rarely large enough, or low-noise enough, to support meaningful interpretation even at 4 SDs.
So in practice, quoting precise figures like 5, 7, or 20 SDs is misleading, because the tails usually aren't sufficiently characterized (and they certainly aren't for intelligence) -- all you can really say is that the result is beyond the validated range of the test. It's like taking seriously a measurement of 151.887 from an instrument that reports integers up to 10: you're implying a level of precision and range that you don't realistically have. It comes across as careless about statistical nuance and rigor.
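To put rough numbers on the sample-size point, here's a minimal sketch using scipy (my choice, not anything from the original exchange), under the assumption of an exactly normal model -- an assumption that itself breaks down in the far tails of real-world traits:

```python
from scipy.stats import norm

# Upper-tail probability of a standard normal at k SDs, and the rough
# sample size (~1/p) you'd need just to expect ONE observation out there.
for k in (2, 3, 4, 5, 7):
    p = norm.sf(k)  # P(Z > k) for a standard normal
    print(f"{k} SD: tail prob ~ {p:.2e}, ~{1 / p:,.0f} samples to expect one")
```

Even taking the normal model at face value, 7 SDs corresponds to roughly one in 800 billion -- far more people than have ever lived -- which is the sense in which such figures outrun any data that could conceivably validate them.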