The problem is that you don’t understand the purpose of the studies at all and you’re violating several important principles which need to be kept in mind when applying logic to the real world.
Our primary goal is to determine net harm or benefit. If I run a study on whether something causes harm or benefit and see no change in the underlying rates, then it causes no net harm. If it is making some people slightly more likely to get cancer, and others slightly less likely, then there is no net harm: there are just as many cancers as there were before. I may have changed the distribution of cancers in the population, but I have certainly not caused any net harm to the population.
This study’s purpose is to look at the net effect of the treatment. If we see the same amount of hyperactivity in the population prior to and after the study, then we cannot say that the dye causes hyperactivity in the general population.
“But,” you complain, “clearly some people are being harmed!” Well yes, some people are worse off after the treatment in such a theoretical case. But here’s the key: for the effect NOT to show up in the general population, there are only three major possibilities:
1) The people who are harmed are such a small portion of the population as to be statistically irrelevant.
2) There are just as many people who benefit from the treatment (and as a result do NOT suffer from the metric in question, though they otherwise would) as there are people who would not suffer from the metric without the treatment but do as a result of it. (This is extremely unlikely, as the magnitudes of the two effects would have to be almost exactly equal to cancel out in this manner.)
3) There is no effect.
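Possibility 2 is easy to see in a quick simulation. This is a minimal sketch; the group sizes (5% harmed, 5% helped) and the 10% base rate are invented purely for illustration:

```python
import random

# Made-up population: 5% are harmed by the treatment (show the metric only
# if treated), 5% are helped (show it only if untreated), and the remaining
# 90% have a 10% base rate of the metric regardless of treatment.
random.seed(1)
N = 200_000

def has_metric(kind: float, treated: bool) -> bool:
    if kind < 0.05:                   # harmed subgroup
        return treated
    if kind < 0.10:                   # helped subgroup
        return not treated
    return random.random() < 0.10     # unaffected majority

kinds = [random.random() for _ in range(N)]
untreated = sum(has_metric(k, False) for k in kinds) / N
treated = sum(has_metric(k, True) for k in kinds) / N
print(f"untreated rate: {untreated:.3f}, treated rate: {treated:.3f}")
```

Both printed rates land near 0.14 even though 10% of individuals had their outcome flipped by the treatment; a study comparing population rates would correctly report no net effect.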
If our purpose is to make [b]the best possible decision with the least possible amount of money spent[/b] (as it should always be), then a study on the net effect is the most efficient way of doing so. Testing every single possible SNP substitution is not possible, ergo, it is an irrational way to perform a study on the effects of anything. The only reason you would do such a study is if you had good reason to believe that a specific substitution had an effect either way.
Another major problem you run into when you try to run studies “your way” (more commonly known as “the wrong way”) is the blue M&M problem. You see, if you test even 10 things for an effect at the standard p < 0.05 significance level, you have roughly a 40% chance of finding at least one false correlation. This means that in order to have a high degree of confidence in the results of your study, you must raise the threshold for detection, and raise it massively. Not only do you have to account for the fact that you’re testing more things, you also have to account for all the studies that don’t get published which would contradict your findings (publication bias: people are far more likely to report positive effects than null results).
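The arithmetic behind that 40% figure is straightforward, assuming independent tests at alpha = 0.05; the Bonferroni correction shown at the end is the standard crude fix:

```python
def familywise_error(m: int, alpha: float = 0.05) -> float:
    """Chance of at least one false positive across m independent tests."""
    return 1 - (1 - alpha) ** m

print(familywise_error(10))   # ~0.40: the "blue M&M" 40% figure
print(familywise_error(20))   # ~0.64: it only gets worse

# Bonferroni correction: test each hypothesis at alpha/m instead, which
# caps the familywise error rate at alpha.
print(familywise_error(10, alpha=0.05 / 10))   # ~0.049, back under 0.05
```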
In other words, you are not actually making a rational criticism of these studies. In fact, you can see exactly where you go wrong:
[quote]If 10% of kids become more hyperactive and 10% become less hyperactive after eating food coloring, such a methodology will never, ever detect it.[/quote]
While possible, how [b]likely[/b] is this? The answer is “Not very.” And given Occam’s Razor, we can mostly discard this barring evidence to the contrary. And no, moronic parents are not evidence to the contrary; you will find all sorts of idiots who claim that all sorts of things that don’t do anything do something. Anecdotes are not evidence.
This is a good example of someone trying to apply logic without actually trying to understand what the underlying problem is. Without understanding what is going on in the first place, you’re in real trouble.
I will note that your specific example is flawed in any case; the idea that these people are in fact being affected is deeply controversial, and unfortunately a lot of it seems to involve the eternal crazy train (choo choo!) that somehow, magically, artificially produced things are more harmful than “naturally” produced things. Unfortunately this is largely based on the (obviously false and irrational) premise that things which are natural are somehow good for you, or that things which are “artificial” are bad for you, a premise which has utterly failed to be substantiated by and large. You should always automatically be deeply suspicious of any such people, especially when you see “parents claim”.
The reason that the FDA says that food dyes are okay is because there is no evidence to the contrary. Food dye does not cause hyperactivity according to numerous studies, and in fact the studies that fail to show the effect are massively more convincing than those which do due to publication bias and the weakness of the studies which claim positive effects.
I know I am several years late to this party, but I felt it appropriate to hop in.
Einstein, with an IQ of 160+, really has an unmeasurably high IQ: at that level, you are outside the bounds of the statistical distribution used to norm the test. An IQ of 160 is so rare that tests cannot measure differences between people at that level in any predictive fashion, because there simply aren’t enough people with IQs that high to norm against.
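To put a number on that rarity, here is a quick calculation assuming the usual norming of mean 100, SD 15 (some older tests used SD 16, which changes the figure):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # assumed norming: mean 100, SD 15
p = 1 - iq.cdf(160)                 # upper tail: P(IQ >= 160)
print(p)                            # about 3.2e-05
print(round(1 / p))                 # roughly 1 in 31,600 people
```

At four standard deviations above the mean, a test normed on even tens of thousands of people has barely anyone up there to calibrate against.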
Now, what about the case of Feynman?
Well, there are tons of possibilities here:
1) It was a bad IQ test.
2) He got extraordinarily unlucky. (Remember, your variability on a modern test is about ±5 points; if the older tests had larger margins of error, he could have been significantly more intelligent than this score suggests.)
3) IQ is not a perfect indicator of g. This is actually known. It strongly correlates with g, but it is not identical to g; it is entirely possible that he was smarter than the IQ test indicates because of this discrepancy.
4) He did badly on the test for some external reason (he was tired, the test didn’t get graded properly, he got the wrong person’s score back… any number of possibilities that could theoretically lower his IQ).
5) He really DID have an IQ of 125 in high school, but via concerted effort increased his intelligence greatly over time. In other words, he may have had significant untapped potential. Did he take the IQ test before or after he went on his crazy math-learning spree? This is especially plausible given that he was still an adolescent.
6) He may really have only had an IQ of about 125, maybe as high as the mid 130s, and simply made better use of it than most people.