[quote]Correct. But neither can we say that the dye does not cause hyperactivity in anyone.[/quote]
No, but that is not our goal in the first place. Doing a test on every single possible trait is economically infeasible and unreasonable; ergo, net impact is our best metric.
The benefit is “we get a new food additive to use”.
The net cost is zero in terms of health impact (no increase in hyperactivity in the general population).
Ergo, the net benefit is a new food additive. This is very simple math here. Net benefit is what we care about in this case, as it is what we are studying. If it redistributes ailments amongst the population, then there may be even more optimal uses, but we’re still looking at a benefit.
If you want to delve deeper, that’s going to be a separate experiment.
[quote]Your making the claim “no evidence to the contrary” shows that you have not read the literature, have not done a PubMed search on “ADHD, food dye”, and have no familiarity with toxicity studies in general. There is always evidence to the contrary. An evaluation weighs the evidence on both sides. You can take any case where the FDA has said “There is no evidence that X”, and look up the notes from the panel they held where they considered the evidence for X and decided that the evidence against X outweighed it.[/quote]
Your making the claim “evidence to the contrary” presumes that such evidence is worth anything. The problem is that, unfortunately, it often isn’t.
If someone does a study on 20 different colors of M&Ms at the 95% confidence level, then on average one of the colors will appear to change cancer risk purely by chance. The fact that their study showed, with 95% confidence, that blue M&Ms increased your odds of getting cancer [b]is not evidence for the idea that blue M&M’s cause cancer[/b].
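The arithmetic behind this multiple-comparisons point is easy to check. The sketch below is a toy illustration (the numbers are mine, not from any real study): run 20 independent tests at the 95% confidence level on data with NO real effect, and at least one “significant” result is more likely than not.

```python
# Toy illustration of the multiple-comparisons problem: 20 independent
# tests at the 95% confidence level, applied to data with no real effect.
n_tests = 20
alpha = 0.05  # per-test false-positive rate at 95% confidence

# Probability that at least one test comes back "significant" by chance
p_at_least_one = 1 - (1 - alpha) ** n_tests
# Expected number of spurious "significant" results
expected_false_positives = n_tests * alpha

print(f"P(at least one spurious finding) = {p_at_least_one:.3f}")  # ~0.642
print(f"Expected spurious findings       = {expected_false_positives:.1f}")  # 1.0
```

So with 20 colors tested, we expect about one false positive even when no color does anything, which is exactly why the blue-M&M result carries no weight on its own.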
Worse, the odds of a negative-finding study being published are considerably lower than the odds of a positive-finding study being published. This is known as “publication bias”. Additionally, people are more likely to be biased against artificial additives than towards them, particularly “independent researchers” who very likely are researching it precisely because they harbor the belief that it does in fact have an effect.
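The publication-bias point can be made concrete with a toy model (the publication rates below are illustrative assumptions I made up, not measured values): even when a substance has no effect at all, selective publication inflates the share of the published literature that reports one.

```python
# Toy model of publication bias for a substance with NO real effect.
# Assumed (hypothetical) publication rates:
p_positive = 0.05        # false-positive rate under the null (alpha)
pub_if_positive = 0.90   # chance a positive finding gets published
pub_if_negative = 0.20   # chance a negative finding gets published

published_positive = p_positive * pub_if_positive
published_negative = (1 - p_positive) * pub_if_negative

# Fraction of the PUBLISHED literature that reports an effect
frac_positive = published_positive / (published_positive + published_negative)
print(f"Published studies reporting an effect: {frac_positive:.1%}")  # ~19.1%
```

Under these assumed rates, nearly a fifth of the published studies “find” an effect that does not exist, even though only 5% of all studies run produced a false positive.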
This is very basic and is absolutely essential to understanding data of this sort. When I say that there is no evidence for it, I am saying precisely that: just because someone studied 20 colors of M&M’s and found that one was associated with cancer at the 95% confidence level tells me nothing. It isn’t evidence for anything. It is entirely possible that it DOES cause cancer, but the study has failed to provide me with evidence of that fact.
You are thinking in terms of formal logic, but that is not how science works. If you lack sufficient evidence to reject the null hypothesis, then you don’t have evidence. And the problem is that a single study is often insufficient to actually demonstrate an effect unless the effect is extremely blatant.
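One way to see why a single study is often insufficient is statistical power. The sketch below is my own illustration (the effect size and sample sizes are hypothetical): it computes the power of a one-sided one-sample z-test to detect a modest real effect, using only the standard library.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_one_sided(d: float, n: int) -> float:
    """Power of a one-sided one-sample z-test at alpha = 0.05
    for a true effect of size d (in standard deviations) with n subjects."""
    z_alpha = 1.645  # one-sided critical value for alpha = 0.05
    return norm_cdf(d * math.sqrt(n) - z_alpha)

# A modest real effect (0.2 SD) at various sample sizes:
for n in (25, 100, 400):
    print(f"n = {n:4d}: power = {power_one_sided(0.2, n):.2f}")
```

With 25 subjects the test detects a real 0.2 SD effect only about a quarter of the time; it takes hundreds of subjects before detection becomes near-certain. A small study that fails to reject the null therefore says very little either way.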
[quote]The answer is, “This is very likely.” This is how studies turn out all the time, partly due to genetics. Different people have different genetics, different bacteria in their gut, different lifestyles, etc. This makes them metabolize food differently. It makes their brain chemistry different. Different people are different.[/quote]
For this to happen, the group that is helped and the group that is harmed would have to be very nearly the same size.
Is it possible for things to help one person and harm another? Absolutely.
Is it probable that something will help almost exactly as many people as it harms? No. Especially not some random genetic trait (there are genetic traits, such as sex, where this IS likely because it is an even split in the population, so you do have to be careful about that, but sex-dependence of results is pretty obvious).
The probability of equal distribution of the traits is vastly outweighed by the probability of it not being equally distributed. Ergo the result you are espousing is in fact extremely unlikely.
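As a toy way of quantifying this (the cancellation model and the uniform prior are my own assumptions, not anything from the studies under discussion): suppose the dye harms carriers of some trait and helps non-carriers by an equal amount, so the net effect is zero only when the trait frequency is very close to 50%.

```python
# Toy model: net effect cancels only if the trait frequency p lands
# within a narrow band around 0.5. Assume p is a priori uniform on (0, 1).
tolerance = 0.01  # p must be within +/- 1 percentage point of 0.5

# Under a uniform prior, the probability mass in (0.49, 0.51):
p_cancel = 2 * tolerance
print(f"P(near-exact cancellation) = {p_cancel:.0%}")  # 2%
```

Even in this generous toy model, a near-exact cancellation is a 2% coincidence; for most trait frequencies the harmed and helped groups differ in size and the net effect shows up in the data.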
[quote]This is very basic and is absolutely essential to understanding data of this sort. When I say that there is no evidence for it, I am saying precisely that: just because someone studied 20 colors of M&M’s and found that one was associated with cancer at the 95% confidence level tells me nothing. It isn’t evidence for anything. It is entirely possible that it DOES cause cancer, but the study has failed to provide me with evidence of that fact.[/quote]
When I said that “making the claim ‘no evidence to the contrary’ shows that you have not read the literature, have not done a PubMed search on ‘ADHD, food dye’, and have no familiarity with toxicity studies in general,” I meant that literally. I’m well aware of what 95% means and what publication bias means. If you had read the literature on ADHD and food dye, you would see that it is closer to a 50-50 split between studies concluding that there is or is not an effect on hyperactivity. You would know that some particular food dyes, e.g., tartrazine, are more controversial than others. You would also find that over the past 40 years, the list of food dyes claimed not to be toxic by the FDA and its European counterparts has been shrinking.
If you were familiar with toxicity studies in general, you would know that this is usually the case for any controversial substance. For instance, the FDA says there is “no evidence” that aspartame is toxic, and yet something like 75% of independent studies of aspartame concluded that it was toxic. The phrase “no evidence of toxicity”, when used by the FDA, is shorthand for something like “meta-analysis does not provide us with a single consistent toxicity narrative that conforms to our prior expectations”. You would also know that toxicity studies are frequently funded by the companies trying to sell the product being tested, and so publication bias works strongly against findings of toxicity.