The experiments described in table 3 are arranged in order of donor-hours of exposure (DHE; one donor in the experiment room for 1 hr equals one DHE; five donors in the room for 5 hr equals 25 DHE, etc.) because rate of RV16 transmission correlated in a nearly linear fashion with DHE (r = .926, P < .01). The correlation was nearly perfect when DHE was plotted logarithmically (r = .997, P < .001; figure 3).
At the risk of being some 40 years late with my critique, that quotation has some problems.
First, if the relationship between two quantities A and B is linear in a semi-logarithmic plot, that generally indicates that the relationship between A and B is nonlinear, very boring edge cases aside! “I have one model which explains my data well and another which explains my data ‘nearly perfectly’” seems a bit of a strange message.
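To make that concrete, here is a small sketch (illustrative numbers only, not the study's data): if y grows exactly like log(x), a straight-line fit of y against x itself can still produce an impressively high Pearson r, while the fit against log(x) is numerically perfect.

```python
import math

# Illustrative data: y exactly logarithmic in x (not the study's numbers).
xs = [45.0, 75.0, 125.0, 210.0, 350.0, 600.0]
ys = [math.log(x) for x in xs]

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

print(pearson_r(xs, ys))                         # high, but clearly below 1
print(pearson_r([math.log(x) for x in xs], ys))  # ~1.0: y is linear in log x
```

So "nearly linear in x" and "nearly perfect in log x" together are exactly what you would expect from a single, nonlinear (logarithmic) relationship.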
Also, arguing from models of infection, it is almost certainly wrong. The simplest toy model I can think of is "per DHE, every uninfected, susceptible person has a fixed probability of getting infected", which would lead to an exponential decline in the number of healthy individuals as DHE increases. But if that were the case, the curve should come out linear when plotted with a logarithmic Y axis (counting survivors), not a logarithmic X axis. (I think the direction of curvature would be the same either way: highest infection risk per DHE at low DHEs, then a slow decrease.)
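That toy model is easy to write down (the per-DHE probability below is a made-up assumption, purely for illustration): the surviving fraction decays as (1 − p)^d, so its logarithm is a straight line in DHE, which is the log-Y prediction, not the log-X one.

```python
import math

# Toy model (an assumption for illustration, not the study's model):
# per DHE, every uninfected, susceptible person is independently
# infected with a fixed probability p. The fraction still uninfected
# after d DHE is (1 - p)**d: exponential decay, so log(uninfected)
# is a straight line in d.
P_INFECT = 0.01  # hypothetical per-DHE infection probability

def uninfected_fraction(dhe: float, p: float = P_INFECT) -> float:
    """Fraction of susceptibles still uninfected after `dhe` donor-hours."""
    return (1.0 - p) ** dhe

# Doubling the exposure exactly doubles log(uninfected fraction):
for d in (50, 100, 200, 400):
    print(d, math.log(uninfected_fraction(d)))
```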
To explain why the distribution looks the way it does would require a more complex model. For example: "Every uninfected person has a fixed personal exposure threshold at which they become infected. It ranges from 45 DHE to 600 DHE and is exponentially distributed over that range (favoring smaller thresholds)."
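One concrete version of that threshold story, as a sketch: here I use a log-uniform threshold distribution on the 45–600 DHE range (a specific choice of "favoring smaller thresholds", not necessarily what the study's data imply), because it makes the infected fraction come out exactly linear in log(DHE), matching the observed semi-log fit.

```python
import math

# Hypothetical threshold model: each person has a fixed exposure
# threshold T in [LO, HI] DHE and becomes infected once cumulative
# exposure exceeds T. With T log-uniform (density proportional to 1/T,
# so smaller thresholds are favored), the infected fraction
# P(T <= dhe) is exactly linear in log(dhe) on that range.
LO, HI = 45.0, 600.0  # threshold range from the comment above

def infected_fraction(dhe: float) -> float:
    """P(threshold <= dhe) for a log-uniform threshold on [LO, HI]."""
    if dhe <= LO:
        return 0.0
    if dhe >= HI:
        return 1.0
    return math.log(dhe / LO) / math.log(HI / LO)

for d in (45, 90, 180, 360, 600):
    print(f"{d:3d} DHE -> {infected_fraction(d):.3f} infected")
```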
If this were true, that would be highly surprising. Unlike with the killbots in Futurama, there is no good reason why your immune system should have a preset kill limit for RV16. If it is a case of "the immune system can fight off low doses but eventually gets overwhelmed", then I am surprised that it would get equally overwhelmed by a low dose over a long period and a high dose over a short period.
I am also amazed that a study based on infecting people with cold viruses was run with so large a sample size that I cannot spot the error bars with my bare eyes. Whom did they have on their IRB? Realistically, error bars are important: a 100% that is five out of five is very different from a 100% that is 1,000 out of 1,000.
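The 5-out-of-5 versus 1,000-out-of-1,000 point can be made quantitative with a standard binomial confidence interval. A sketch using the Wilson score interval (chosen here because, unlike the naive normal approximation, it stays sensible at an observed proportion of exactly 100%):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Both observed rates are "100%", but the uncertainty differs enormously:
print(wilson_interval(5, 5))        # wide: lower bound well below 1
print(wilson_interval(1000, 1000))  # razor-thin: lower bound near 1
```

Five of five is compatible with a true rate anywhere down to roughly 57%; a thousand of a thousand pins it above roughly 99.6%.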
I agree that, considered from a mechanistic perspective, the obvious explanations for this data would be "surprising if true". My guess for the actual model of infection is "did a sufficient quantity of the virus end up in contact with a non-protective surface like a mucous membrane", where "sufficient quantity" might vary by individual but for which "per DHE, every uninfected, susceptible person has a fixed probability of getting infected" is often a reasonable proxy (though it loses the details that might actually be relevant for more narrowly intervening on transmission). But I find it hard to be very confident, given the state of the available evidence.